When deciding how ‘death’ should be defined, it is helpful to consider cases in which vital functions are restored to an organism long after those vital functions have ceased. Here I consider whether such restoration cases can be used to refute termination theses. Focusing largely on the termination thesis applied to human animals (the view that when human animals die they cease to exist), I develop a line of argument from the possibility of human restoration to the conclusion that in many actual cases, human animals continue to exist after they die. The line of reasoning developed here can be extended to show that other organisms survive death in many actual cases. This line of reasoning improves on other arguments that have been offered against termination theses. And if my argument regarding human animals surviving death is successful, then assuming that human persons are animals, we can also conclude that human persons in many actual cases continue to exist after death.
Wiggins’ (2012) argument against propositional accounts of knowing how is based on a development of some considerations taken from Aristotle’s Nicomachean Ethics. Aristotle argued that the knowledge needed for participation in an ethos cannot be codified in propositional form so as to let it be imparted to someone who did not already have it. This is because any putative codification would be incomplete, and require that knowledge in order to extend it to novel cases. On a reasonable interpretation of his argument, Wiggins claims that the same goes for practical knowledge in general, and that this shows that a propositional view of knowing how is incorrect. This paper shows that this argument is unsound.
Omnipotence is maximal power. Maximal greatness (or perfection) includes omnipotence. According to traditional Western theism, God is maximally great (or perfect), and therefore is omnipotent. Omnipotence seems puzzling, even paradoxical, to many philosophers. They wonder, for example, whether God can create a spherical cube, or make a stone so massive that he cannot move it. Is there a consistent analysis of omnipotence? What are the implications of such an analysis for the nature of God?
Until fairly recently secession has been a neglected topic among philosophers. Two factors may explain why philosophers have now begun to turn their attention to secession. First, in the past two decades there has been a great increase not only in the number of attempted secessions, but also in successful secessions, and philosophers may simply be reacting to this new reality, attempting to make normative sense of it. The reasons for the frequency of attempts to secede are complex, but there are two recent developments that make the prospect of state-breaking more promising: improvement in national security and liberalization of trade.
We reflect on the information paradigm in quantum and gravitational physics and on how it may assist us in approaching quantum gravity. We begin by arguing, using a reconstruction of its formalism, that quantum theory can be regarded as a universal framework governing an observer’s acquisition of information from physical systems taken as information carriers. We continue by observing that the structure of spacetime is encoded in the communication relations among observers and more generally the information flow in spacetime. Combining these insights with an information-theoretic Machian view, we argue that the quantum architecture of spacetime can operationally be viewed as a locally finite network of degrees of freedom exchanging information. An advantage – and simultaneous limitation – of an informational perspective is its quasi-universality, i.e. quasi-independence of the precise physical incarnation of the underlying degrees of freedom. This suggests exploiting these informational insights to develop a largely microphysics-independent top-down approach to quantum gravity, complementing extant bottom-up approaches by closing the scale gap between the unknown Planck scale physics and the familiar physics of quantum (field) theory and general relativity systematically from two sides. While some ideas have been pronounced before in similar guise and others are speculative, the way they are strung together and justified is new and supports approaches attempting to derive emergent spacetime structures from correlations of quantum degrees of freedom.
Are you a liberal, socialist or conservative? Are you fiscally conservative but socially liberal? Or socially conservative and fiscally liberal? Are you a classical liberal or a neo-liberal? Are you a Marxist socialist or a neo-Marxist socialist? …
1. Synthese recently published Pierre Wagner's article Carnapian and Tarskian Semantics, which outlines some important differences between semantics as Tarski conceived it (at least in the 1930s-40s) and as Carnap conceived it. …
According to the Fine-Tuning Argument (FTA), the existence of life in our universe confirms the Multiverse Hypothesis (HM). A standard objection to FTA is that it violates the Requirement of Total Evidence (RTE). I argue that RTE should be rejected in favor of the Predesignation Requirement, according to which, in assessing the outcome of a probabilistic process, we should only use evidence characterizable in a manner available prior to observing the outcome. This produces the right verdicts in some simple cases in which RTE leads us astray; and, when applied to FTA, it shows that our evidence does confirm HM.
Although the proper definition of ‘rape’ is itself a matter of some dispute, rape is generally understood to involve sexual penetration of a person by force and/or without that person's consent. Rape is committed overwhelmingly by men and boys, usually against women and girls, and sometimes against other men and boys. (For the most part, this entry will assume male perpetrators and female victims.) Virtually all feminists agree that rape is a grave wrong, one too often ignored, mischaracterized, and legitimized. Feminists differ, however, about how the crime of rape is best understood, and about how rape should be combated both legally and socially.
In this paper I want to consider the implications of materialism about the human mind for a scientific understanding of consciousness. I shall argue that, while science can tell us many exciting things about human consciousness, it won’t be able to pinpoint any specific material property that constitutes seeing something red, say, or being in pain, or indeed that constitutes being conscious (that is, feeling like something rather than nothing). Not that this means there are definite facts about consciousness about which science must permanently remain silent. Rather the difficulty lies with our concepts of conscious properties, which are vague in certain crucial respects.
Pragmatic responses to skepticism have been overlooked in recent decades. This paper explores one such response by developing a character called the Pragmatic Skeptic. The Pragmatic Skeptic accepts skeptical arguments for the claim that we lack good evidence for our ordinary beliefs, and that they do not constitute knowledge. However, they do not think we should give up our beliefs in light of these skeptical conclusions. Rather, we should retain them, since we have good practical reasons for doing so. This takes the sting out of skepticism: we can be skeptics, of a kind, without thereby succumbing to practical or intellectual disaster. I respond to objections, and compare the position of the Pragmatic Skeptic to views found in the work of (among others) David Hume, William James, David Lewis, Crispin Wright, Berislav Marušić, and Robert Pasnau.
The paper has a twofold aim. On the one hand, it provides what appears to be the first game-theoretic modeling of Napoléon’s last campaign, which ended dramatically on June 18, 1815, at Waterloo. It is specifically concerned with the decision Napoléon made on June 17, 1815, to detach part of his army and send it against the Prussians, whom he had defeated, though not destroyed, on June 16 at Ligny. Military strategists and historians agree that this decision was crucial but disagree about whether it was rational. Hypothesizing a zero-sum game between Napoléon and Blücher, and computing its solution, we show that dividing his army could have been a cautious strategy on Napoléon’s part, a conclusion which runs counter to the charges of misjudgment commonly heard since Clausewitz. On the other hand, the paper addresses some methodological issues relative to “analytic narratives”. Some political scientists and economists who are both formally and historically minded have proposed to explain historical events in terms of properly mathematical game-theoretic models. We liken the present study to this “analytic narrative” methodology, which we defend against some of the objections it has encountered.
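The maximin reasoning the abstract invokes can be made concrete with a toy example. Everything below is a hypothetical 2x2 matrix chosen only to illustrate how a "cautious" (maximin) strategy is identified; the numbers and strategy labels are not taken from the paper.

```python
# Toy zero-sum game. Rows are the first player's options, columns the
# second player's; entries are payoffs to the row player.
payoffs = [
    [3, 2],  # e.g. detach a corps against the Prussians (hypothetical)
    [5, 0],  # e.g. keep the army united (hypothetical)
]

def maximin(matrix):
    """Row player's security level: the best guaranteed payoff."""
    return max(min(row) for row in matrix)

def minimax(matrix):
    """Column player's security level: the least payoff it must concede."""
    return min(max(col) for col in zip(*matrix))

# When the two security levels coincide, the game has a saddle point in
# pure strategies, and the maximin row is the cautious choice.
value_low = maximin(payoffs)
value_high = minimax(payoffs)
cautious_row = max(range(len(payoffs)), key=lambda i: min(payoffs[i]))
```

Here the two security levels coincide at 2, so the game has a pure-strategy solution, and the first row (the "divide the army" option in this made-up matrix) is the cautious maximin choice even though the second row has the higher best case.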
I’m visiting the University of Genoa and talking to two category theorists: Marco Grandis and Giuseppe Rosolini. Grandis works on algebraic topology and higher categories, while Rosolini works on the categorical semantics of programming languages. …
This paper responds to a new objection, due to Ben Bramble, against attitudinal theories of sensory pleasure and pain: the objection from unconscious pleasures and pains. According to the objection, attitudinal theories are unable to accommodate the fact that sometimes we experience pleasures and pains of which we are, at the time, unaware. In response, I distinguish two kinds of unawareness and argue that the subjects in the examples that support the objection are unaware of their sensations in only a weak sense, and this weak sort of unawareness of a sensation does not preclude its being an object of one’s attitudes.
In this paper, I will argue that metaphysicians ought to utilize quantum theories of gravity (QG) as incubators for a future metaphysics. In §2, I will argue that this ought to be done. In §3, I will present case studies from the history of science where physical theories have challenged both the dogmatic and speculative metaphysician. In §4, I will present two theories of QG and demonstrate the challenge they pose to certain aspects of our current metaphysics; in particular, how they challenge our understanding of the abstract-concrete distinction. The central goal of this paper is to encourage metaphysicians to look to physical theories, especially those involving cosmology such as string theory and loop quantum gravity, when doing metaphysics.
Kant saw science as presupposing that the natural laws bring maximal diversity under maximal unity. Many philosophers, such as David Lewis, have regarded objective chances as upshots of science’s aim at systematic unity—as ideal credences projected onto the world. This Kantian projectivism has seemed the only possible way to account for the rational constraint (codified by the ‘Principal Principle’) that our credences about chances impose on our credences regarding what they are chances of. This paper examines three ways of elaborating Lewis’s Kantian strategy for explaining this rational constraint. After arguing that none of these three approaches is unproblematic, the paper proposes a non-Kantian alternative account according to which a chance measures the strength of a causal tendency.
In Colin Maclaurin’s four-volume An Account of Sir Isaac Newton’s Philosophical Discoveries, published posthumously by his wife Anne, he responds in a footnote to Spinoza’s “Epistle 15,” the so-called “worm in the blood” letter. …
I'm working through Daniel Batson's latest book, What's Wrong with Morality? Batson distinguishes between four different types of motives for seemingly moral behavior, each with a different type of ultimate goal. …
Mark Schroeder has recently proposed a new analysis of knowledge. I examine that analysis and show that it fails. More specifically, I show that it faces a problem all too familiar from the post-Gettier literature, namely, that it delivers the wrong verdict in fake barn cases.
In this paper, I identify two general positions with respect to the relationship between environment and natural selection. These positions consist in claiming that selective claims need and, respectively, need not be relativized to homogenous environments. I then show that adopting one or the other position makes a difference with respect to the way in which the effects of selection are to be measured in certain cases in which the focal population is distributed over heterogeneous environments. Moreover, I show that these two positions lead to two different interpretations – the Pricean and contextualist ones – of a type of selection scenario in which multiple groups varying in properties affect the change in the metapopulation mean of individual-level traits. Showing that these two interpretations stem from different attitudes towards environmental homogeneity allows me to argue: a) that, unlike the Pricean interpretation, the contextualist interpretation can only claim that drift or selection is responsible for the change in frequency of the focal trait in a given metapopulation if details about whether or not group formation is random are specified; b) that the traditional main objection against the Pricean interpretation – consisting in arguing that the latter takes certain side-effects of individual selection to be effects of group selection – is unconvincing. This leads me to suggest that the ongoing debate about which of the two interpretations is preferable should concentrate on different issues than previously thought.
The Enhanced Indispensability Argument (EIA) appeals to the existence of Mathematical Explanations of Physical Phenomena (MEPPs) to justify mathematical Platonism, following the principle of Inference to the Best Explanation. In this paper, I examine one example of a MEPP – the explanation of the 13-year and 17-year life cycle of magicicadas – and argue that this case cannot be used to justify mathematical Platonism. I then generalize my analysis of the cicada case to other MEPPs, and show that these explanations rely on what I will call ‘optimal representations’, which are representations that capture all that is relevant to explain a physical phenomenon at a specified level of description. In the end, because the role of mathematics in MEPPs is ultimately representational, they cannot be used to support mathematical Platonism. I finish the paper by addressing the claim, advanced by many defenders of the EIA, that quantification over mathematical objects results in explanations that have more theoretical virtues, especially that they are more general and modally stronger than alternative explanations. I will show that the EIA cannot be successfully defended by appealing to these notions.
I blogged this exactly 2 years ago here, seeking insight for my new book (Mayo 2017). Over 100 (rather varied) interesting comments ensued. This is the first time I’m incorporating blog comments into published work. …
In his An Essay Concerning Human Understanding, Locke’s primary aim is to provide an empiricist theory of ideas that can support interesting results about the nature of language and knowledge. Within this theory, Locke distinguishes between simple ideas and complex ideas (E II.ii.1: 119). Roughly, an idea is complex if it has other ideas as parts; otherwise, it is simple. For Locke, as is well known, all simple ideas derive from sensation (perception through sight, taste, smell, hearing, or touch) or reflection (a form of introspection directed at mental acts) (E II.i.2-4: 104-106). Aetiology also plays a role in Locke’s classification of complex ideas: ideas of modes, ideas of substances, and ideas of relations. All complex ideas are formed by a voluntary act of combination or composition. Ideas of modes, such as numbers, beauty, and theft (E II.xii.5: 165) are formed without considering whether the combinations conform to real patterns existing in the world (E II.xi.6: 158, E II.xxii.1: 288, E II.xxxi.3: 376). Ideas of substances (such as human beings, sheep, and armies – E II.xii.6: 165), by contrast, are formed with a desire “to copy Things, as they really do exist” (E II.xxxi.3: 377). Ideas of relations are like ideas of modes (E II.xxxi.14: 383-384), except that their aetiology includes, in addition to the mental act of composition, the distinct mental act of comparison on the basis of some respect or dimension (E II.xi.4: 157, E II.xxv.1: 319).
Co-speech gestures have been reported to give rise to so-called cosuppositional inferences (Schlenker 2015, 2016). For example, a sentence like “John will not [use the stairs] UP”, produced with an UP gesture (finger pointed upwards) co-occurring with the verb phrase is argued to give rise to the conditional presupposition that if John were to use the stairs, he would go up the stairs. Such a presuppositional treatment of the gestural inference predicts that it should project out of certain linguistic environments. We tested this prediction using an Inferential Judgment Task, in which participants had to rate the strength of inferences arising from the use of the co-speech gestures UP and DOWN, when produced with the predicate “use the stairs”, in six different linguistic environments: plain affirmative and negative sentences, modal sentences containing “might”, and quantified sentences involving “each”, “none”, and “exactly one”. The results provide evidence that the conditional inference projects from the scope of negation, and projects universally from the scope of “none” and “exactly one”. In addition, the data suggest that the cosupposition can also be locally accommodated in the scope of negation and “none”.
Ugliness is a neglected topic in contemporary analytic aesthetics. This is regrettable given that this topic is not just genuinely fascinating, but could also illuminate other areas in the field, seeing as ugliness, albeit unexplored, does feature rather prominently in several debates in aesthetics. This paper articulates a ‘deformity-related’ conception of ugliness. Ultimately, I argue that deformity, understood in a certain way, and displeasure, jointly suffice for ugliness. First, I motivate my proposal, by locating a ‘deformity-related’ conception of ugliness in aesthetic tradition, offering examples in support, and rejecting related alternative suggestions. Second, I argue that the proposal boasts considerable merits. Not only does it capture much of what we ordinarily think of as ugly, but it also comprises an objective criterion for ugliness, offers unity and comprehensiveness, and is informative and explanatorily potent. Third, I discuss a number of objections, thereby demonstrating that the proposal withstands reflective scrutiny.
Starting with the seminal paper, the so-called AGM theory of belief revision has been extensively studied by logicians, computer scientists, and philosophers. The general setup is well-known, and we review it here to fix ideas and notation. Let K be a belief set, a set of propositional formulae closed under classical consequence representing an agent’s initial collection of beliefs. Given a belief ϕ that the agent has acquired, the set K ∗ ϕ represents the agent’s collection of beliefs upon acquiring ϕ. A central project in the theory of belief revision is to study constraints on functions ∗ mapping a belief set K and a propositional formula ϕ to a new belief set K ∗ ϕ. For reference, the key AGM postulates are listed in the Appendix (Section A). This simple framework has been analyzed, extended, and itself revised in various ways (see  for a survey of this literature), and much has been written about the status of its philosophical foundations (cf. [10, 21, 20]).
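The K ∗ ϕ setup above can be illustrated with a toy semantic sketch (my own illustration, not the paper's formalism): represent a belief set by the possible worlds compatible with it, and revise by keeping the ϕ-worlds among the current candidates when that is consistent, falling back to all ϕ-worlds otherwise. In this finite setting the policy satisfies the basic AGM postulates, though it is of course far cruder than a full revision operator.

```python
from itertools import product

# Worlds are truth-value assignments to two atoms, p and q.
ATOMS = ("p", "q")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product((True, False), repeat=2)]

def models(formula):
    """Worlds at which a formula (given as a Python predicate) is true."""
    return [w for w in WORLDS if formula(w)]

def revise(belief_worlds, formula):
    """Keep compatible worlds if any; otherwise accept the new info outright."""
    phi_worlds = models(formula)
    kept = [w for w in belief_worlds if w in phi_worlds]
    return kept if kept else phi_worlds

# Initially the agent believes both p and q.
K = models(lambda w: w["p"] and w["q"])
# Revising by not-p contradicts K, so the agent retreats to the not-p worlds.
K_revised = revise(K, lambda w: not w["p"])
```

The success postulate holds by construction (the revised set consists only of worlds satisfying the new formula), and when the new information is consistent with K, revision coincides with expansion, as the vacuity postulate demands.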
When thinking about rational agents facing choices, one appealing mathematical model recurs in the literature. From Borges’ story ‘The Garden of Forking Paths’ to a host of technical paradigms, sometimes at war, sometimes at peace, all invoke the picture of a branching tree of finite sequences of events with epistemic indistinguishability relations for agents between these sequences, reflecting their limited powers of observation. Indeed, tree models for computation, with branches standing for process evolutions over time, have long been studied in computer science, cf. [32, 33, 7, 2, 14].
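The branching-tree picture described above can be rendered in a few lines. This is an illustrative toy of my own: histories are finite sequences of events, the tree is the set of all such sequences up to some depth, and an agent with limited powers of observation (here, one who sees only how many events occurred, not which ones) induces an indistinguishability relation between histories.

```python
from itertools import product

EVENTS = ("a", "b")

def histories(depth):
    """All event sequences up to the given length: the branching tree."""
    hs = [()]
    for n in range(1, depth + 1):
        hs.extend(product(EVENTS, repeat=n))
    return hs

def observe(history):
    """The agent's limited observation: only the number of events."""
    return len(history)

def indistinguishable(h1, h2):
    """Two histories look the same to the agent iff observations match."""
    return observe(h1) == observe(h2)
```

Richer agents are modeled by refining `observe`; a perfect observer makes the relation trivial, while the constant function makes every history look alike.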
The intuitive notion of evidence has both semantic and syntactic features. In this paper, we develop an evidence logic for epistemic agents faced with possibly contradictory evidence from different sources. The logic is based on a neighborhood semantics, where a neighborhood N indicates that the agent has reason to believe that the true state of the world lies in N. Further notions of relative plausibility between worlds and beliefs based on the latter ordering are then defined in terms of this evidence structure, yielding our intended models for evidence-based beliefs. In addition, we also consider a second more general flavor, where belief and plausibility are modeled using additional primitive relations, and we prove a representation theorem showing that each such general model is a p-morphic image of an intended one. This semantics invites a number of natural special cases, depending on how uniform we make the evidence sets, and how coherent their total structure. We give a structural study of the resulting ‘uniform’ and ‘flat’ models. Our main results are sound and complete axiomatizations for the logics of all four major model classes with respect to the modal language of evidence, belief and safe belief. We conclude with an outlook toward logics for the dynamics of changing evidence, and the resulting language extensions and connections with logics of plausibility change.
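A small executable sketch of the kind of construction the abstract describes. The details below are one standard way of deriving plausibility and belief from neighborhoods, chosen for illustration and not necessarily the paper's exact definitions: world v is at least as plausible as w when every evidence set containing w also contains v, and the agent believes a proposition when it holds at every maximally plausible world.

```python
# Three candidate worlds and two possibly conflicting evidence sets.
WORLDS = {1, 2, 3}
EVIDENCE = [{1, 2}, {2, 3}]

def at_least_as_plausible(v, w):
    """v is at least as plausible as w: v lies in every evidence set w lies in."""
    return all(v in E for E in EVIDENCE if w in E)

def maximal_worlds():
    """Worlds with no strictly more plausible rival."""
    return {w for w in WORLDS
            if not any(at_least_as_plausible(v, w)
                       and not at_least_as_plausible(w, v)
                       for v in WORLDS)}

def believes(prop):
    """Belief: the proposition holds at every maximally plausible world."""
    best = maximal_worlds()
    return bool(best) and all(prop(w) for w in best)
```

Note how the two evidence sets conflict (no single source settles everything), yet their overlap makes world 2 strictly most plausible, so the agent forms a consistent belief from contradictory evidence.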
Building on the work of  and , Savage showed that any agent with a preference ordering satisfying certain intuitive axioms can be represented as an expected utility maximizer. The idea behind Savage’s result is to take as primitive an agent’s (state-based) preference over a set of prizes and define the agent’s beliefs and utilities from its preference. Thus properties of an agent’s beliefs, represented as subjective probability distributions, are derived from properties of the agent’s preferences. See, for example, Chapter 1 of  for a discussion of the literature on the axiomatic foundations of decision theory. Building on Savage’s work and the fundamental contribution by Anscombe and Aumann, a number of different belief operators have been proposed in the literature. Asheim and Søvik provide an excellent survey of these contributions.
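What the representation delivers can be shown with a deliberately small, hypothetical example (states, prizes, and all numbers below are my own illustration): once subjective probabilities over states and utilities over prizes have been extracted from the agent's preferences, choosing according to those preferences is just maximizing expected utility.

```python
# Hypothetical states, subjective beliefs, and utilities over prizes.
states = ("rain", "sun")
prob = {"rain": 0.3, "sun": 0.7}               # derived subjective probabilities
utility = {"wet": 0, "dry": 8, "dry_encumbered": 6}  # derived utilities

# Acts map states to prizes (Savage-style acts).
acts = {
    "take umbrella": {"rain": "dry_encumbered", "sun": "dry_encumbered"},
    "leave it":      {"rain": "wet",            "sun": "dry"},
}

def expected_utility(act):
    """Expected utility of an act under the subjective probabilities."""
    return sum(prob[s] * utility[acts[act][s]] for s in states)

best_act = max(acts, key=expected_utility)
```

With these illustrative numbers the sure outcome of taking the umbrella (utility 6) beats the gamble of leaving it (expected utility 5.6), so the represented agent takes the umbrella; raise the probability of sun and the ranking flips.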
Dynamic epistemic logic, broadly conceived, is the study of logics of information change. This is the first paper in a two-part series introducing this research area. In this paper, I introduce the basic logical systems for reasoning about the knowledge and beliefs of a group of agents.