It is a striking fact from reverse mathematics that almost all theorems of countable and countably representable mathematics are equivalent to just five subsystems of second order arithmetic. The standard view is that the significance of these equivalences lies in the set existence principles that are necessary and sufficient to prove those theorems. In this article I analyse the role of set existence principles in reverse mathematics, and argue that they are best understood as closure conditions on the powerset of the natural numbers.
This article follows on from the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Strong Types are also extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes.
Weak supplementation says that if x is a proper part of y, then y has a proper part that doesn’t overlap x. Suppose that we are impressed by standard counterexamples to weak supplementation like the following. …
Comparativism is the position that the fundamental doxastic state consists in comparative beliefs (e.g., believing p to be more likely than q), with partial beliefs (e.g., believing p to degree x) being grounded in and explained by patterns amongst comparative beliefs that exist under special conditions. In this paper, I develop a version of comparativism that originates with a suggestion made by Frank Ramsey in his ‘Probability and Partial Belief’ (1929). By means of a representation theorem, I show how this ‘Ramseyan comparativism’ can be used to weaken the (unrealistically strong) conditions required for probabilistic coherence that comparativists usually rely on, while still preserving enough structure to let us retain the usual comparativists’ account of quantitative doxastic comparisons.
The Kochen-Specker theorem is an important and subtle topic in the
foundations of quantum mechanics (QM). The theorem demonstrates the
impossibility of a certain type of interpretation of QM in terms of
hidden variables (HV) that naturally suggests itself when one begins
to consider the project of interpreting QM. We here present the
theorem/argument and the foundational discussion surrounding it at
different levels. The reader looking for a quick overview should read
the following sections and subsections: 1, 2, 3.1, 3.2, 4, and
6. Those who read the whole entry will find proofs of some non-trivial
claims in supplementary documents.
The following general attitude to mathematics seems plausible: standard claims, such as ‘there are infinitely many primes’ or ‘every consistent set of sentences has a model’, are true; nevertheless, if one rifles through the fundamental furniture of the universe, one will not find mathematical objects, such as numbers, sets or models. A natural way of making sense of this attitude is to augment it with the following thought: this is possible because such standard claims have paraphrases that make clear that their truth does not require the fundamental existence of such objects. This paper will draw out some surprising consequences of this general approach to mathematics—an approach that I call paraphrase anti-realism. These consequences concern the relationship between logical structure, on the one hand, and explanatory structure, on the other.
There is an ambiguity in the fundamental concept of deductive logic that went unnoticed until the middle of the 20th Century. Sorting it out has led to profound mathematical investigations with applications in complexity theory and computer science. The origins of this ambiguity and the history of its resolution deserve philosophical attention, because our understanding of logic stands to benefit from an appreciation of their details.
I discuss the problem of whether true contradictions of the form “x is P and not P” might be the expression of an implicit relativization to distinct respects of application of one and the same predicate P. Priest rightly claims that one should not mistake true contradictions for an expression of lexical ambiguity. However, he primarily targets cases of homophony for which lexical meanings do not overlap. There exist more subtle forms of equivocation, such as the relation of privative opposition singled out by Zwicky and Sadock in their study of ambiguity. I argue that this relation, which is basically a relation of general to more specific, underlies the logical form of true contradictions. The generalization appears to be that all true contradictions really mean “x is P in some respects/to some extent, but not in all respects/not to all extent”. I relate this to the strict-tolerant account of vague predicates and outline a variant of the account to cover one-dimensional and multi-dimensional predicates.
We investigate the relative computability of exchangeable binary relational data when presented in terms of the distribution of an invariant measure on graphs, or as a graphon in either L¹ or the cut distance. We establish basic computable equivalences, and show that L¹ representations contain fundamentally more computable information than the other representations, but that 0′ suffices to move between computable such representations. We show that 0′ is necessary in general, but that in the case of random-free graphons, no oracle is necessary. We also provide an example of an L¹-computable random-free graphon that is not weakly isomorphic to any graphon with an a.e. continuous version.
Proof-theoretic semantics is an alternative to truth-condition
semantics. It is based on the fundamental assumption that the central
notion in terms of which meanings are assigned to certain expressions
of our language, in particular to logical constants, is that of
proof rather than truth. In this sense
proof-theoretic semantics is semantics in terms of proof. Proof-theoretic semantics also means the semantics of proofs,
i.e., the semantics of entities which describe how we arrive at certain
assertions given certain assumptions. Both aspects of proof-theoretic
semantics can be intertwined, i.e. …
tion to perform in order to change a currently undesirable situation. The policymaker has at her disposal a team of experts, each with their own understanding of the causal dependencies between different factors contributing to the outcome. The policymaker has varying degrees of confidence in the experts’ opinions. She wants to combine their opinions in order to decide on the most effective intervention. We formally define the notion of an effective intervention, and then consider how experts’ causal judgments can be combined in order to determine the most effective intervention. We define a notion of two causal models being compatible, and show how compatible causal models can be combined. We then use this as the basis for combining experts’ causal judgments. We illustrate our approach on a number of real-life examples.
Øystein Linnebo and Richard Pettigrew have recently developed a version of noneliminative mathematical structuralism based on Fregean abstraction principles. They argue that their theory of abstract structures proves a consistent version of the structuralist thesis that positions in abstract structures have only structural properties. They do this by defining a subset of the properties of positions in structures, so-called fundamental properties, and arguing that all fundamental properties of positions are structural. In this paper, we argue that the structuralist thesis, even when restricted to fundamental properties, does not follow from the theory of structures that Linnebo and Pettigrew have developed. To make their account work, we propose a formal framework in terms of Kripke models that makes structural abstraction precise. The formal framework allows us to articulate a revised definition of fundamental properties, understood as intensional properties. Based on this revised definition, we show that the restricted version of the structuralist thesis holds.
We consider Geanakoplos and Polemarchakis’s generalization of Aumann’s famous result on “agreeing to disagree”, in the context of imprecise probability. The main purpose is to reveal a connection between the possibility of agreeing to disagree and the interesting and anomalous phenomenon known as dilation. We show that for two agents who share the same set of priors and update by conditioning on every prior, it is impossible to agree to disagree on the lower or upper probability of a hypothesis unless a certain dilation occurs. With some common topological assumptions, the result entails that it is impossible to agree not to have the same set of posterior probabilities unless dilation is present. This result may be used to generate sufficient conditions for guaranteed full agreement in the generalized Aumann setting for some important models of imprecise priors, and we illustrate the potential with an agreement result involving the density ratio classes. We also provide a formulation of our results in terms of “dilation-averse” agents who ignore information about the value of a dilating partition but otherwise update by full Bayesian conditioning. Keywords: agreeing to disagree; common knowledge; dilation; imprecise probability.
According to a conventional view, there exists no common-cause model of quantum correlations satisfying locality requirements. In fact, Bell’s inequality is derived from some locality conditions and the assumption that the common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argue that in the derivation of the inequality the existence of a common common-cause for multiple correlations is implicitly assumed, and that this assumption is unreasonably strong. According to their idea, what is necessary for explaining the quantum correlations is a common cause for each correlation. However, in this paper, we will show that for almost all entangled states we cannot construct a local model that is consistent with the quantum mechanical predictions even when we require only the existence of a common cause for each correlation.
In this paper I study an epistemic alternating-offers game with a termination option, in which each rational and self-interested player expresses strategic caution (assigns positive probability to the event of the opponent choosing the termination option) and internally coherent concession-proportional beliefs (expects the opponent to be more likely to terminate the game after being offered a division of the resource associated with a larger personal utility concession than after being offered a division associated with a smaller one). I define the epistemic conditions under which players expressing concession-proportional beliefs converge on a subjective equilibrium, as well as conditions under which the subjective equilibrium will yield an egalitarian distribution of bargaining gains.
Although the theory of the assertoric syllogism was Aristotle’s great invention, one which dominated logical theory for the succeeding two millennia, accounts of the syllogism evolved and changed over that time. Indeed, in the twentieth century, doctrines were attributed to Aristotle which lost sight of what Aristotle intended. One of these mistaken doctrines concerned the very form of the syllogism: that a syllogism consists of three propositions containing three terms arranged in four figures. Another was that a syllogism is a conditional proposition deduced from a set of axioms. There is even unclarity about what the basis of syllogistic validity consists in. Returning to Aristotle’s text, and reading it in the light of commentary from late antiquity and the middle ages, we find a coherent and precise theory which shows all these claims to be based on a misunderstanding and misreading.
An axiomatic theory of truth is a deductive theory of truth as a
primitive undefined predicate. Because of the liar and other
paradoxes, the axioms and rules have to be chosen carefully in order
to avoid inconsistency. Many axiom systems for the truth predicate
have been discussed in the literature and their respective properties
have been analysed. Several philosophers, including many deflationists, have endorsed axiomatic theories of truth in their
accounts of truth. The logical properties of the formal theories are
relevant to various philosophical questions, such as questions about
the ontological status of properties, Gödel’s theorems,
truth-theoretic deflationism, eliminability of semantic notions and
the theory of meaning.
An increasingly popular account of logic, Anti-Exceptionalism, views logic as similar to, and continuous with, other scientific theories. It thus treats revision of logic analogously to revision of scientific theories, applying familiar abductive standards of scientific theory choice to the case of logic. We should, that is, move from one logical theory L to another L′ when L′ does “better” than L in terms of theoretical virtues like: ...simplicity, ontological leanness (Occam’s razor), explanatory power, a low degree of ad hocness, unity, [and] fruitfulness. (Priest 2006: 135) This is intended to explain rational change of logic; nothing so detailed is needed to explain the vacillating flirtations we might have with one logic or another. Abductive methodology is supposed to provide justification for moving from one logic to another. One whole body of logic, that is: the particular and common version of this methodology I’m here interested in isn’t aimed at settling whether we should revise any particular logical principle.
We give a probabilistic justification of the shape of one of the probability weighting functions used in Prospect Theory. To do so, we use an idea recently introduced by Herzog and Hertwig (2014). Along the way we also suggest a new method for the aggregation of probabilities using statistical distances.
The mereological predicate ‘is part of’ can be used to define the predicate ‘is identical with’. I argue that this entails that mereological theories can be ideologically simpler than nihilistic theories that do not use the notion of parthood—contrary to what has been argued by Ted Sider. Moreover, if one accepts an extensional mereology, there are good philosophical reasons apart from ideological simplicity to give a mereological definition of identity.
Consider the principle that for a given agent S, and any proposition p, it is metaphysically possible that S is thinking p, and p alone, at time t. According to philosophical folklore, this principle cannot be true, despite its initial appeal, because there are more propositions than possible worlds: the principle would require a different possible world to witness the thinking of each proposition, and there simply aren’t enough possible worlds to go around. Some theorists have taken comfort in the thought that, when taken in conjunction with facts about human psychology, the principle was not on particularly firm footing to begin with: most propositions are far too complicated for any human to grasp, much less think uniquely.
Conditional probability is one of the central concepts in probability theory. Some notion of conditional probability is part of every interpretation of probability. The basic mathematical fact about conditional probability is that p(A|B) = p(A ∧ B)/p(B) where this is defined. However, while it has been typical to take this as a definition or analysis of conditional probability, some (perhaps most prominently Hájek (2003)) have argued that conditional probability should instead be taken as the primitive notion, so that this formula is at best coextensive, and at worst sometimes gets it wrong. Section 1 of this article considers the notion of conditional probability, and the two main sorts of arguments that it should be taken as primitive, as well as some mathematical principles that have been alleged to be important for it. Sections 2 and 3 then describe the two main competing mathematical formulations of conditional probability.
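The ratio formula, and the case where it goes silent, can be illustrated with a minimal sketch in Python. The four-world joint distribution and the helper names (`worlds`, `p`, `conditional`) are hypothetical, chosen only to show that p(A|B) = p(A ∧ B)/p(B) is well defined exactly when p(B) > 0:

```python
from fractions import Fraction

# Hypothetical joint distribution: four equiprobable worlds, each assigning
# truth values to two propositions A and B.
worlds = [
    {"A": True,  "B": True},
    {"A": True,  "B": False},
    {"A": False, "B": True},
    {"A": False, "B": False},
]

def p(event):
    """Unconditional probability of an event (a predicate on worlds)."""
    return Fraction(sum(1 for w in worlds if event(w)), len(worlds))

def conditional(a, b):
    """Ratio definition: p(A|B) = p(A and B) / p(B), undefined if p(B) = 0."""
    pb = p(b)
    if pb == 0:
        raise ZeroDivisionError("p(B) = 0, so the ratio-defined p(A|B) is undefined")
    return p(lambda w: a(w) and b(w)) / pb

# p(A and B) = 1/4 and p(B) = 1/2, so p(A|B) = 1/2.
print(conditional(lambda w: w["A"], lambda w: w["B"]))  # 1/2
```

The `ZeroDivisionError` branch marks exactly the gap that primitivists about conditional probability point to: on the ratio analysis, conditioning on a probability-zero event is simply undefined, whereas a primitive conditional probability may still assign it a value.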
In a recent paper, Wigglesworth claims that syntactic criteria of theoretical equivalence are not appropriate for settling questions of equivalence between logical theories, since such criteria judge classical and intuitionistic logic to be equivalent; he concludes that logicians should use semantic criteria instead. However, this is an artefact of the particular syntactic criterion chosen, which is an implausible criterion of theoretical equivalence (even in the non-logical case). Correspondingly, there is nothing to suggest that a more plausible syntactic criterion should not be used to settle questions of equivalence between different logical theories; such a criterion (which may already be found in the literature) is exhibited and shown to judge classical and intuitionistic logic to be inequivalent.
This paper proposes a reading of the history of equivalence in mathematics. The paper has two main parts. The first part focuses on a relatively short historical period when the notion of equivalence is about to be decontextualized but does not yet have a commonly agreed-upon name. The method for this part is rather straightforward: following the clues left by others for the ‘first’ modern use of equivalence. The second part focuses on a relatively long historical period when equivalence is experienced in context. The method for this part is to strip the ideas from their set-theoretic formulations and methodically examine the variations in the ways equivalence appears in some prominent historical texts. The paper reveals several critical differences in the conceptions of equivalence at different points in history that are at variance with the standard account of the mathematical notion of equivalence encompassing the concepts of equivalence relation and equivalence class.
One of the critical problems with the classical philosophy of science is that it has not been quantitative in the past. But today the modern quantitative theory of information gives us the mathematical tools that are needed to make philosophy quantitative for the first time. A quantitative philosophy of science can provide vital insights into critical scientific questions ranging from the nature and properties of a Theory of Everything (TOE) in physics to the quantitative implications of Gödel’s celebrated incompleteness theorem for mathematics and physics. It also provides us with something that was conspicuously lacking in Kuhn’s famous book (1962) that introduced the idea of paradigm shifts: a precise definition of a paradigm. This paper will begin to investigate these and other philosophical implications of the modern quantitative theory of information.
If one of Gentzen’s consistency proofs for pure number theory could be shown to be finitistically acceptable, an important part of Hilbert’s program would be vindicated. This paper focuses on whether the transfinite induction on ordinal notations needed for Gentzen’s second proof can be finitistically justified. In particular, the focus is on Takeuti’s purportedly finitistically acceptable proof of the well-ordering of ordinal notations in Cantor normal form. The paper begins with a historically informed discussion of finitism and its limits, before introducing Gentzen’s and Takeuti’s respective proofs. The rest of the paper is dedicated to investigating the finitistic acceptability of Takeuti’s proof, including a small but important fix to that proof. That discussion strongly suggests that there is a philosophically interesting finitist standpoint that Takeuti’s proof, and therefore Gentzen’s proof, conforms to.
Totality statements—i.e. those of the form ‘α and that’s all’—play an important role in metaphysics. They are introduced to allow adequate formulations of overarching theories such as physicalism. One might initially be tempted to state this as: physics entails everything. That is, if P is a complete physical description of the world, then P entails every truth. The problem is that it seems clear that physics will not entail everything. For example, it will not decide whether there are demons. But then either the claim that there are demons or its negation will be a truth that physics fails to entail. This seems to be a problem with the suggested formulation, rather than with the underlying idea of physicalism, however. For suppose that there are in fact no demons. The idea behind physicalism is that the world is wholly physical, and the fact that physics fails to entail that there are no demons hardly seems to count against this. We should thus give a better formulation—and that is where totality statements come in. For while a complete physical description P will not entail that there are no demons, the claim that P and that’s all—typically regimented as TP, for a ‘totality operator’ T—does seem to entail that there are no demons. For if there were, P would not be all there is to the world.
Several theists, including Linda Zagzebski, have claimed that theism is somehow committed to nonvacuism about counterpossibles. Even though Zagzebski herself has rejected vacuism, she has offered an argument in favour of it, which Edward Wierenga has defended as providing strong support for vacuism that is independent of the orthodox semantics for counterfactuals, mainly developed by David Lewis and Robert Stalnaker. In this paper I show that argument to be sound only relative to the orthodox semantics, which entails vacuism, and give an example of a semantics for counterfactuals countenancing impossible worlds for which it fails.
Quantum Mechanics and the Dodecahedron
This is an expanded version of my G+ post, which was a watered-down version of Greg Egan’s G+ post and the comments on that. I’ll start out slow, and pick up speed as I go. …
The 600-Cell (Part 3)
There are still a few more things I want to say about the 600-cell. Last time I described the ‘compound of five 24-cells’. David Richter built a model of this, projected from 4 dimensions down to 3:
It’s nearly impossible to tell from this picture, but it’s five 24-cells inscribed in the 600-cell, with each vertex of the 600-cell being the vertex of just one of these five 24-cells. …