This paper aims to provide modal foundations for mathematical platonism. I examine Hale and Wright’s (2009) objections to the merits and need, in the defense of mathematical platonism and its epistemology, of the thesis of Necessitism. In response to Hale and Wright’s objections to the role of epistemic and metaphysical modalities in providing justification for both the truth of abstraction principles and the success of mathematical predicate reference, I examine the Necessitist commitments of the abundant conception of properties endorsed by Hale and Wright and examined in Hale (2013a); examine cardinality issues which arise depending on whether Necessitism is accepted at first- and higher-order; and demonstrate how a multi-dimensional intensional approach to the epistemology of mathematics, augmented with Necessitism, is consistent with Hale and Wright’s notion of there being epistemic entitlement rationally to trust that abstraction principles are true. Epistemic and metaphysical modality may thus be shown to play a constitutive role in vindicating the reality of mathematical objects and truth, and in explaining our possible knowledge thereof.
Loop quantum gravity has developed a robust scheme for resolving classical singularities in a variety of symmetry-reduced models of gravity. In this essay, we demonstrate that the same quantum correction which is crucial for singularity resolution is also responsible for the phenomenon of signature change in these models, whereby one effectively transitions from a ‘fuzzy’ Euclidean space to a Lorentzian space-time in deep quantum regimes. As long as one uses a quantization scheme which respects covariance, holonomy corrections from loop quantum gravity generically lead to non-singular signature change, thereby giving an emergent notion of time in the theory. Robustness of this mechanism is established by comparison across a large class of midisuperspace models and by allowing for diverse quantization ambiguities. Conceptual and mathematical consequences of such an underlying quantum-deformed space-time are briefly discussed.
In his paper, ‘Regarding the “Hole Argument”’, Weatherall suggests that models of general relativity related by a hole diffeomorphism must be regarded as being physically equivalent. At a later stage in the paper, however, he also argues that there is a sense in which two such models may be regarded as being empirically distinct—a fortiori physically distinct. We attempt to delineate the logic behind these two prima facie contradictory claims. We argue that the latter sense rests upon a misunderstanding of the import of shift arguments in the foundations of spacetime theories.
According to Michael Friedman’s theory of explanation, a law X explains laws Y1, Y2, ..., Yn precisely when X unifies the Y’s, where unification is understood in terms of reducing the number of independently acceptable laws. Philip Kitcher criticized Friedman’s theory but did not analyze the concept of independent acceptability. Here we show that Kitcher’s objection can be met by modifying an element in Friedman’s account. In addition, we argue that there are serious objections to the use that Friedman makes of the concept of independent acceptability.
It’s often unclear what we ought to do. Much of the time this is because it’s unclear what matters. Suppose, for instance, that we’re poultry farmers wondering whether we ought to put our chickens in cages or let them roam free. We know that we’ll make more profit if we put them in cages but also that they’ll suffer more if we do. Still, if we want to know what we ought to do, we need to know whether minimizing the suffering of our chickens is something that matters, and, if so, how much it matters in comparison to our maximizing profits.
There are good reasons to believe that the classical structure of space-time, as it appears in general relativity, breaks down at small length scales of the order of the Planck scale. This poses a problem in particular for any theory of quantum gravity, which should extend to such short length scales. Assuming that the classical concept of space-time (described as a manifold) is no longer viable as a fundamental concept in such a theory, one needs to explain how it emerges as an approximate concept in the appropriate (long distance) limit.
In a series of recent papers, two of which appeared in this journal, a group of philosophers, physicists, and climate scientists have argued that something they call the ‘hawkmoth effect’ poses insurmountable difficulties for those who would use nonlinear models, including climate simulation models, to make quantitative predictions or to produce ‘decision-relevant probabilities.’ Such a claim, if it were true, would undermine much of climate science, among other things. Here, we examine the two lines of argument the group has used to support their claims. The first comes from a set of results in dynamical systems theory associated with the concept of ‘structural stability.’ The second relies on a mathematical demonstration of their own, using the logistic equation, that they present using a hypothetical scenario involving two apprentices of Laplace’s omniscient demon. We prove two theorems that are relevant to their claims, and conclude that both of these lines of argument fail. There is nothing out there that comes close to matching the characteristics this group attributes to the ‘hawkmoth effect.’
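A toy sketch of the kind of demonstration at issue, assuming nothing about the group’s actual construction: iterate the logistic map alongside a slightly structurally perturbed variant of it (the parameter values and the form of the perturbation here are illustrative) and measure how far trajectories started from the same initial condition drift apart.

```python
# Illustrative sketch only: compare the logistic map with a small
# structural perturbation of the model itself (not merely a perturbed
# initial condition). Parameters r, eps, x0 are arbitrary choices.

def logistic(x, r=3.9):
    return r * x * (1.0 - x)

def perturbed(x, r=3.9, eps=1e-3):
    # a small change to the dynamics, standing in for model error
    return r * x * (1.0 - x) + eps * x

def trajectory(step, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

a = trajectory(logistic, 0.3, 30)
b = trajectory(perturbed, 0.3, 30)

# how far the two model trajectories separate over 30 iterations
divergence = max(abs(u - v) for u, v in zip(a, b))
```

With `r = 3.9` both maps send the unit interval into itself, so the trajectories stay bounded even as they separate; the dispute in the literature concerns what such separation does and does not license one to conclude about quantitative prediction.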
A counterpossible conditional is a counterfactual with an impossible antecedent. Common sense delivers the view that some such conditionals are true, and some are false. In recent publications, Timothy Williamson has defended the view that all are true. In this paper we defend the common sense view against Williamson’s objections.
On a very intuitive way of thinking, if it is already determined that some event will happen, then there is no non-trivial chance (no chance between 0 and 1) of it failing to happen, and if it is already determined that some event will not happen, then there is no non-trivial chance of it happening. On this way of thinking, it does not make sense to claim both that it is already determined that Always Dreaming will win this year’s Kentucky Derby and that the chance of Classic Empire winning instead is 1/2.
Omnipotence is maximal power. Maximal greatness (or perfection) includes omnipotence. According to traditional Western theism, God is maximally great (or perfect), and therefore is omnipotent. Omnipotence seems puzzling, even paradoxical, to many philosophers. They wonder, for example, whether God can create a spherical cube, or make a stone so massive that he cannot move it. Is there a consistent analysis of omnipotence? What are the implications of such an analysis for the nature of God?
We reflect on the information paradigm in quantum and gravitational physics and on how it may assist us in approaching quantum gravity. We begin by arguing, using a reconstruction of its formalism, that quantum theory can be regarded as a universal framework governing an observer’s acquisition of information from physical systems taken as information carriers. We continue by observing that the structure of spacetime is encoded in the communication relations among observers and more generally the information flow in spacetime. Combining these insights with an information-theoretic Machian view, we argue that the quantum architecture of spacetime can operationally be viewed as a locally finite network of degrees of freedom exchanging information. An advantage – and simultaneous limitation – of an informational perspective is its quasi-universality, i.e. quasi-independence of the precise physical incarnation of the underlying degrees of freedom. This suggests exploiting these informational insights to develop a largely microphysics-independent top-down approach to quantum gravity to complement extant bottom-up approaches, closing the scale gap between the unknown Planck scale physics and the familiar physics of quantum (field) theory and general relativity systematically from two sides. While some ideas have been pronounced before in similar guise and others are speculative, the way they are strung together and justified is new and supports approaches attempting to derive emergent spacetime structures from correlations of quantum degrees of freedom.
According to the Fine-Tuning Argument (FTA), the existence of life in our universe confirms the Multiverse Hypothesis (HM). A standard objection to FTA is that it violates the Requirement of Total Evidence (RTE). I argue that RTE should be rejected in favor of the Predesignation Requirement, according to which, in assessing the outcome of a probabilistic process, we should only use evidence characterizable in a manner available prior to observing the outcome. This produces the right verdicts in some simple cases in which RTE leads us astray; and, when applied to FTA, it shows that our evidence does confirm HM.
The paper has a twofold aim. On the one hand, it provides what appears to be the first game-theoretic modeling of Napoléon’s last campaign, which ended dramatically on June 18, 1815, at Waterloo. It is specifically concerned with the decision Napoléon made on June 17, 1815, to detach part of his army and send it against the Prussians, whom he had defeated, though not destroyed, on June 16 at Ligny. Military strategists and historians agree that this decision was crucial but disagree about whether it was rational. Hypothesizing a zero-sum game between Napoléon and Blücher, and computing its solution, we show that dividing his army could have been a cautious strategy on Napoléon’s part, a conclusion which runs counter to the charges of misjudgment commonly heard since Clausewitz. On the other hand, the paper addresses some methodological issues relative to ‘analytic narratives’. Some political scientists and economists who are both formally and historically minded have proposed to explain historical events in terms of properly mathematical game-theoretic models. We liken the present study to this ‘analytic narrative’ methodology, which we defend against some of the objections that it has received.
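Computing the solution of a small zero-sum game can be sketched concretely. The payoff matrix below is purely hypothetical and is not the paper’s actual model of the campaign; the code uses the standard closed-form solution for 2x2 zero-sum games.

```python
def solve_2x2_zero_sum(A):
    """Value and row player's optimal strategy for a 2x2 zero-sum game.

    A is the row player's payoff matrix [[a, b], [c, d]]. Returns
    (value, None) at a pure-strategy saddle point, otherwise
    (value, (p, 1 - p)) where p is the probability of playing row 1.
    """
    (a, b), (c, d) = A
    # saddle point check: maximin equals minimax in pure strategies
    v_lower = max(min(a, b), min(c, d))
    v_upper = min(max(a, c), max(b, d))
    if v_lower == v_upper:
        return v_lower, None
    # standard closed-form mixed-strategy solution
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return value, (p, 1 - p)

# hypothetical payoffs: rows = split army / keep army together, say
value, strategy = solve_2x2_zero_sum([[2, -1], [-1, 1]])
# here value = 0.2 and the row player mixes with probability 0.4 on row 1
```

A mixed-strategy solution of this kind is how one formalizes “cautious” play: it guarantees the game’s value regardless of what the opponent does.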
I’m visiting the University of Genoa and talking to two category theorists: Marco Grandis and Giuseppe Rosolini. Grandis works on algebraic topology and higher categories, while Rosolini works on the categorical semantics of programming languages. …
Kant saw science as presupposing that the natural laws bring maximal diversity under maximal unity. Many philosophers, such as David Lewis, have regarded objective chances as upshots of science’s aim at systematic unity—as ideal credences projected onto the world. This Kantian projectivism has seemed the only possible way to account for the rational constraint (codified by the ‘Principal Principle’) that our credences about chances impose on our credences regarding what they are chances of. This paper examines three ways of elaborating Lewis’s Kantian strategy for explaining this rational constraint. After arguing that none of these three approaches is unproblematic, the paper proposes a non-Kantian alternative account according to which a chance measures the strength of a causal tendency.
In this paper, I identify two general positions with respect to the relationship between environment and natural selection. These positions consist in claiming that selective claims need and, respectively, need not be relativized to homogeneous environments. I then show that adopting one or the other position makes a difference with respect to the way in which the effects of selection are to be measured in certain cases in which the focal population is distributed over heterogeneous environments. Moreover, I show that these two positions lead to two different interpretations – the Pricean and contextualist ones – of a type of selection scenario in which multiple groups varying in properties affect the change in the metapopulation mean of individual-level traits. Showing that these two interpretations stem from different attitudes towards environmental homogeneity allows me to argue: a) that, unlike the Pricean interpretation, the contextualist interpretation can only claim that drift or selection is responsible for the change in frequency of the focal trait in a given metapopulation if details about whether or not group formation is random are specified; b) that the traditional main objection against the Pricean interpretation – consisting in arguing that the latter takes certain side-effects of individual selection to be effects of group selection – is unconvincing. This leads me to suggest that the ongoing debate about which of the two interpretations is preferable should concentrate on different issues than previously thought.
The Enhanced Indispensability Argument (EIA) appeals to the existence of Mathematical Explanations of Physical Phenomena (MEPPs) to justify mathematical Platonism, following the principle of Inference to the Best Explanation. In this paper, I examine one example of a MEPP —the explanation of the 13-year and 17-year life cycle of magicicadas— and argue that this case cannot be used to justify mathematical Platonism. I then generalize my analysis of the cicada case to other MEPPs, and show that these explanations rely on what I will call ‘optimal representations’, which are representations that capture all that is relevant to explain a physical phenomenon at a specified level of description. In the end, because the role of mathematics in MEPPs is ultimately representational, they cannot be used to support mathematical Platonism. I finish the paper by addressing the claim, advanced by many EIA defenders, that quantification over mathematical objects results in explanations that have more theoretical virtues, especially that they are more general and modally stronger than alternative explanations. I will show that the EIA cannot be successfully defended by appealing to these notions.
I blogged this exactly 2 years ago here, seeking insight for my new book (Mayo 2017). Over 100 (rather varied) interesting comments ensued. This is the first time I’m incorporating blog comments into published work. …
Starting with the seminal paper , the so-called AGM theory of belief revision has been extensively studied by logicians, computer scientists, and philosophers. The general setup is well-known, and we review it here to fix ideas and notation. Let K be a belief set, a set of propositional formulae closed under classical consequence representing an agent’s initial collection of beliefs. Given a belief ϕ that the agent has acquired, the set K ∗ ϕ represents the agent’s collection of beliefs upon acquiring ϕ. A central project in the theory of belief revision is to study constraints on functions ∗ mapping a belief set K and a propositional formula ϕ to a new belief set K ∗ ϕ. For reference, the key AGM postulates are listed in the Appendix (Section A). This simple framework has been analyzed, extended, and itself revised in various ways (see  for a survey of this literature), and much has been written about the status of its philosophical foundations (cf. [10, 21, 20]).
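The setup above can be illustrated with a toy semantic model. The construction below is a crude full-meet revision over two atoms, not a full AGM revision operator with an entrenchment ordering; it is meant only to make the roles of K, ϕ, and K ∗ ϕ concrete.

```python
from itertools import product

# A belief state is the set of valuations the agent considers possible;
# K consists of the formulas true at all of them. Revision by phi keeps
# the phi-worlds among the current candidates if any exist (expansion);
# otherwise it falls back to all phi-worlds (crude full-meet revision,
# not a full AGM construction).

WORLDS = list(product([False, True], repeat=2))  # (p, q) valuations

def worlds_of(phi):
    return {w for w in WORLDS if phi(*w)}

def revise(candidates, phi):
    kept = candidates & worlds_of(phi)
    return kept if kept else worlds_of(phi)

def believes(candidates, phi):
    return bool(candidates) and all(phi(*w) for w in candidates)

# Initially the agent believes both p and q.
K = worlds_of(lambda p, q: p and q)
# Revising by not-p forces retraction of p.
K2 = revise(K, lambda p, q: not p)
```

Revision by ¬p satisfies the success postulate (ϕ ∈ K ∗ ϕ), but full-meet revision also discards the belief in q, which is one reason the AGM postulates are usually paired with more discriminating constructions such as epistemic entrenchment.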
When thinking about rational agents facing choices, one appealing mathematical model recurs in the literature. From Borges’ story ‘The Garden of Forking Paths’ to a host of technical paradigms, sometimes at war, sometimes at peace, all invoke the picture of a branching tree of finite sequences of events with epistemic indistinguishability relations for agents between these sequences, reflecting their limited powers of observation. Indeed, tree models for computation, with branches standing for process evolutions over time, have long been studied in computer science, cf. [32, 33, 7, 2, 14].
The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second, more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.
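One simple way to make the idea of deriving plausibility from evidence concrete is sketched below. The particular definition used (w is at least as plausible as v iff every piece of evidence containing v also contains w) is one illustrative choice, not necessarily the paper’s exact construction, and the worlds and evidence sets are hypothetical.

```python
# Toy model: three worlds and possibly conflicting evidence sets.
WORLDS = {0, 1, 2}
EVIDENCE = [{0, 1}, {1, 2}, {1}]

def at_least_as_plausible(w, v):
    # w is at least as plausible as v iff every evidence set
    # containing v also contains w
    return all(w in E for E in EVIDENCE if v in E)

def most_plausible():
    # worlds at least as plausible as every world
    return {w for w in WORLDS
            if all(at_least_as_plausible(w, v) for v in WORLDS)}

def believes(P):
    # belief in P: P holds at every maximally plausible world
    best = most_plausible()
    return bool(best) and best <= P
```

In this example world 1 comes out as the unique maximally plausible world, since it belongs to every evidence set; belief then amounts to truth at that world, even though the evidence sets {0, 1} and {1, 2} jointly conflict with {1} nowhere but overlap only partially.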
Building on the work of  and , Savage showed that any agent with a preference ordering satisfying certain intuitive axioms can be represented as an expected utility maximizer. The idea behind Savage’s result is to take as primitive an agent’s (state-based) preference over a set of prizes and define the agent’s beliefs and utilities from its preference. Thus properties of an agent’s beliefs, represented as subjective probability distributions, are derived from properties of the agent’s preferences. See, for example, Chapter 1 of  for a discussion of the literature on the axiomatic foundations of decision theory. Building on Savage’s work and the fundamental contribution by Anscombe and Aumann , a number of different belief operators have been proposed in the literature. Asheim and Søvik provide an excellent survey of these contributions.
Dynamic epistemic logic, broadly conceived, is the study of logics of information change. This is the first paper in a two-part series introducing this research area. In this paper, I introduce the basic logical systems for reasoning about the knowledge and beliefs of a group of agents.
The literature on the epistemic foundations of game theory uses a variety of mathematical models to formalise talk about the players’ beliefs about the game, beliefs about the rationality of the other players, beliefs about the beliefs of the other players, beliefs about the beliefs about the beliefs of the other players, and so on (see [Bra07] for a recent survey). Examples include Harsanyi’s type spaces ([Har67]), interactive belief structures ([Bra03]), knowledge structures ([Aum76]) plus a variety of logic-based frameworks (see, for example, [Ben01, HM06, Bon02, Boa02, BSZ08]). A recurring issue involves defining a space of all possible beliefs of the players and whether such a space exists. In this paper, we study one such definition: the notion of assumption-complete models. This notion was introduced in [Bra03], where it is formulated in terms of “interactive belief models” (which are essentially qualitative versions of type spaces). Assumption-completeness is also explored in [BK06], where a number of significant results are found, and connections to modal logic are mentioned. A discussion of that paper, and a syntactic proof of its central result, are to be found in [Pac07].
A rational belief must be grounded in the evidence available to an agent. However, this relation is delicate, and it raises interesting philosophical and technical issues. Modeling evidence requires richer structures than those found in standard epistemic semantics, where the accessible worlds aggregate all reliable evidence gathered so far. Even recent more finely-grained plausibility models ordering the epistemic ranges identify too much: belief is indistinguishable from aggregated best evidence. At the opposite extreme, one might model evidence syntactically as “formulas received”, but this seems overly detailed, and we lose the intuition that evidence can be semantic in nature, zooming in on some actual world.
A recurring issue in any formal model representing agents’ (changing) informational attitudes is how to account for the fact that the agents are limited in their access to the available inference steps, possible observations and available messages. This may be because the agents are not logically omniscient and so do not have unlimited reasoning ability. But it can also be because the agents are following a predefined protocol that explicitly limits statements available for observation and/or communication. Within the broad literature on epistemic logic, there are a variety of accounts that make precise a notion of an agent’s “limited access” (for example, Awareness Logics, Justification Logics, and Inference Logics). This paper interprets the agents’ access set of formulas as a constraint on the agents’ information gathering process limiting which formulas can be observed.
Deontic Logic goes back to Ernst Mally’s 1926 work, Grundgesetze des Sollens: Elemente der Logik des Willens [Mally, E.: 1926, Grundgesetze des Sollens: Elemente der Logik des Willens, Leuschner & Lubensky, Graz], where he presented axioms for the notion ‘p ought to be the case’. Some difficulties were found in Mally’s axioms, and the field has since developed considerably. Logic of Knowledge goes back to Hintikka’s work Knowledge and Belief [Hintikka, J.: 1962, Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press], in which he proposed formal logics of knowledge and belief.
We develop a dynamic modal logic that can be used to model scenarios where agents negotiate over the allocation of a finite number of indivisible resources. The logic includes operators to speak about both preferences of individual agents and deals regarding the reallocation of certain resources. We reconstruct a known result regarding the convergence of sequences of mutually beneficial deals to a Pareto optimal allocation of resources, and discuss the relationship between reasoning tasks in our logic and problems in negotiation. For instance, checking whether a given restricted class of deals is sufficient to guarantee convergence to a Pareto optimal allocation for a specific negotiation scenario amounts to a model checking problem; and the problem of identifying conditions on preference relations that would guarantee convergence for a restricted class of deals under all circumstances can be cast as a question in modal logic correspondence theory.
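The convergence result mentioned above can be sketched in miniature. The agents, resources, and additive utilities below are hypothetical, and the sketch assumes deals with side payments, under which any deal that raises total welfare benefits both parties; the general result is that such sequences of deals terminate in an allocation maximizing the sum of utilities, which is Pareto optimal.

```python
# Hypothetical two-agent, three-resource negotiation with additive utilities.
UTILS = {  # agent -> resource -> utility
    "ann": {"r1": 5, "r2": 1, "r3": 4},
    "bob": {"r1": 2, "r2": 6, "r3": 3},
}

def welfare(alloc):
    # total utility of an allocation (agent -> set of resources)
    return sum(UTILS[a][r] for a, rs in alloc.items() for r in rs)

def negotiate(alloc):
    """Apply welfare-increasing one-resource deals until none remain."""
    improved = True
    while improved:
        improved = False
        for a in alloc:
            for b in alloc:
                for r in list(alloc[a]):
                    # move r from a to b if that raises total welfare;
                    # with side payments such a deal benefits both parties
                    if a != b and UTILS[b][r] > UTILS[a][r]:
                        alloc[a].remove(r)
                        alloc[b].add(r)
                        improved = True
    return alloc

start = {"ann": {"r2"}, "bob": {"r1", "r3"}}
final = negotiate(start)
```

Each deal strictly increases total welfare, and welfare is bounded, so the process terminates; in this toy case every resource ends up with the agent who values it most. Checking such convergence for a fixed scenario is exactly the kind of task the paper casts as a model checking problem.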
This is the second paper in a two-part series introducing logics for reasoning about the dynamics of knowledge and beliefs. Part I introduced different logical systems that can be used to reason about the knowledge and beliefs of a group of agents. In this second paper, I show how to adapt these logical systems to reason about the knowledge and beliefs of a group of agents during the course of a social interaction or rational inquiry. Inference, communication, and observation are typical examples of informative events that have been subjected to logical analysis. The main goal of this article is to introduce the key conceptual and technical issues that drive much of the research in this area.
In this paper we study substantive assumptions in social interaction. By substantive assumptions we mean contingent assumptions about what the players know and believe about each other’s choices and information. We first explain why substantive assumptions are fundamental for the analysis of games and, more generally, social interaction. Then we show that they can be compared formally, and that there exist contexts where no substantive assumptions are being made. Finally we show that the questions raised in this paper are related to a number of issues concerning “large” structures in epistemic game theory.