As Harvey Brown emphasizes in his book Physical Relativity, inertial motion in general relativity is best understood as a theorem, and not a postulate. Here I discuss the status of the “conservation condition”, which states that the energy-momentum tensor associated with non-interacting matter is covariantly divergence-free, in connection with such theorems.
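In standard abstract-index notation (a notational gloss of mine, not quoted from the abstract), the conservation condition for the energy-momentum tensor of non-interacting matter reads:

```latex
\nabla_a T^{ab} = 0
```

where \(\nabla\) is the covariant derivative operator compatible with the spacetime metric.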
I once gave an argument against euthanasia where the controversial center of the argument could be summarized as follows:
Euthanasia would at most be permissible in cases of valid consent and great suffering. …
guest post by
'Dial 888,' Rick said as the set warmed. 'The desire to watch TV, no matter what's on it.' 'I don't feel like dialling anything at all now,' Iran said. 'Then dial 3,' he said. …
This article argues that Thomas Pogge’s important theory of global justice does not adequately appreciate the relation between interactional and institutional accounts of human rights, along with the important normative role of care and solidarity in the context of globalization. It also suggests that more attention needs to be given critically to the actions of global corporations and positively to introducing democratic accountability into the institutions of global governance. The article goes on to present an alternative approach to global justice based on a more robust conception of human rights grounded in a conception of equal positive freedom, in which these rights are seen to apply beyond the coercive political institutions to which Pogge primarily confines them (e.g. to prohibiting domestic violence), and in which they can guide the development of economic, social and political forms to enable their fulfillment.
The practical context for the theoretical reflections in this article is set by two apparently conflicting tendencies: On one side, we have the progression of global economic, technological, and, to a degree, legal and political integration, where this entails a certain diminution of sovereignty. Sovereign nation-states of the so-called Westphalian paradigm, possessing ultimate authority within a territory, are increasingly overwhelmed by the cross-border interconnections or networks that escape their purview; or they are legitimately constrained by new human rights regimes across borders. On the other side, especially in view of the hegemonic activities of the United States, but also in the European Union, new calls for the reestablishment of the sovereignty of nation-states can be heard. This may take the form of a reassertion of a right of states against military interference and a retreat from ideas of humanitarian intervention; or again, it may take the form of an assertion of the priority of nation-states from the standpoint of the administration of welfare or that of the distinctiveness of particular cultures that they sometimes embody. Indeed, a third tendency can also be discerned in present practice: In the face of economic globalization of the first sort, diagnosed as U.S.-led and one-sidedly serving the interests of large industrial societies, but also with an understandable fear of the power of coercive and sometimes violent sovereign nation-states, some actors in the global justice movement seek what they call autonomy, as a self-organization of societies or communities in a diversity of more local forms.
The spectrum argument purports to show that the better-than relation is not transitive, and consequently that orthodox value theory is built on dubious foundations. The argument works by constructing a sequence of increasingly less painful but more drawn-out experiences, such that each experience in the spectrum is worse than the previous one, yet the final experience is better than the experience with which the spectrum began. Hence the betterness relation admits cycles, threatening either transitivity or asymmetry of the relation. This paper examines recent attempts to block the spectrum argument, using the idea that it is a mistake to affirm that every experience in the spectrum is worse than its predecessor: an alternative hypothesis is that adjacent experiences may be incommensurable in value, or that due to vagueness in the underlying concepts, it is indeterminate which is better. While these attempts formally succeed as responses to the spectrum argument, they have additional, as yet unacknowledged costs that are significant. In order to effectively block the argument in its most typical form, in which the first element is radically inferior to the last, it is necessary to suppose that the incommensurability (or indeterminacy) is particularly acute: what might be called radical incommensurability (radical indeterminacy). We explain these costs, and draw some general lessons about the plausibility of the available options for those who wish to save orthodox axiology from the spectrum argument.
The need for expressing temporal constraints in conceptual models is well-known, but it is unclear which representation is preferred and which would be easier for modellers to understand. We assessed five different modes of representing temporal constraints: formal semantics, Description Logics notation, a coding-style notation, temporal EER diagrams, and (pseudo-)natural language sentences. The same information was presented to 15 participants in an experimental evaluation. The evaluation showed that 1) there was a clear preference for diagrams and natural language, and a dislike for the other representations; 2) diagrams were preferred for simple constraints, but the natural language rendering was preferred for more complex temporal constraints; and 3) a multi-modal modelling tool will be needed for the data analysis stage to be effective.
According to priority monism there are many concrete entities and there is one, the cosmos, that is ontologically prior to all the others. I begin by clarifying this thesis as well as its main rival, priority atomism. I show how the disagreement between the priority monist and atomist ultimately turns on how the thesis of concrete foundationalism is implemented. While it’s standard to interpret priority monism as being metaphysically non-contingent, I show that there are two competing, prima facie plausible conceptions of metaphysical necessity—the essence-based and law-based conceptions—on which it is reasonable to view its modal status differently. This, I suggest, is good for the priority monist—various objections to the thesis presuppose that it’s metaphysically non-contingent, while there are arguments for the thesis that don’t make the presupposition.
In this paper I discuss the delayed choice quantum eraser experiment by giving a straightforward account in standard quantum mechanics. At first glance, the experiment suggests that measurements on one part of an entangled photon pair (the idler) can be employed to control whether the measurement outcome of the other part of the photon pair (the signal) produces interference fringes at a screen after being sent through a double slit. Significantly, the choice whether there is interference or not can be made long after the signal photon encounters the screen. The results of the experiment have been alleged to invoke some sort of ‘backwards in time influences’. I argue that in the standard collapse interpretation the issue can be eliminated by taking into account the collapse of the overall entangled state due to the signal photon. Likewise, in the de Broglie-Bohm picture the particle’s trajectories can be given a well-defined description at any instant of time during the experiment. Thus, there is no need to resort to any kind of ‘backwards in time influence’. As a matter of fact, the delayed choice quantum eraser experiment turns out to resemble a Bell-type measurement, and so there really is no mystery.
E.S. Pearson (11 August 1895 – 12 June 1980)
This is a belated birthday post for E.S. Pearson (11 August 1895 – 12 June 1980). It’s basically a post from 2012 which concerns an issue of interpretation (long-run performance vs. probativeness) that’s badly confused these days. …
In the comments on the previous post I was alerted, by Matthias Michel, to a couple of papers that I had not yet read. The first was a paper in Neuroscience Research which came out in 2016:
Using category theory to assess the relationship between consciousness and integrated information theory by Naotsugu Tsuchiya, Shigeru Taguchi, and Hayato Saigo
And the second was a paper in Philosophy Compass that came out in March 2017:
“What is it like to be a bat?”—a pathway to the answer from the integrated information theory by Naotsugu Tsuchiya
After reading these I realized that I had heard an early version of this stuff when I was part of a plenary session with Tsuchiya in Tucson back in April of 2016. …
There’s a new paper on the arXiv that claims to solve a hard problem:
• Norbert Blum, A solution of the P versus NP problem.

Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. …
It is widely accepted that you cannot force someone to make a valid promise. If a robber, after finding that I have no valuables with me, puts a gun to my head and says: “I will shoot you unless you promise to go home and bring me all of the jewelry there”, and I say “I promise”, my promise seems to be null and void. …
It is valuable, especially for philosophers, to learn languages in order to learn to see things from a different point of view, to think differently. This is usually promoted with respect to natural languages. …
We owe to Frege in Begriffsschrift our modern practice of taking unrestricted quantification (in one sense) as basic. I mean, he taught us how to rephrase restricted quantifications by using unrestricted quantifiers plus connectives in the now familiar way, so that e.g. …
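To fix ideas, the familiar rephrasing runs as follows (my rendering, with schematic predicates F and G): the restricted “All F are G” and “Some F are G” become, using unrestricted quantifiers plus connectives,

```latex
\forall x\,(Fx \rightarrow Gx)
\qquad\qquad
\exists x\,(Fx \wedge Gx)
```

so that the restriction to F is absorbed into the conditional (for the universal case) or the conjunction (for the existential case).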
I'm still thinking about that Time op-ed that objects to gene editing because, in so many words, if my parents had used it, my brother wouldn't have existed. I went along with this assumption about existence in my last post on the subject, but only for the sake of argument. …
David Shoemaker says:
August 14, 2017 at 2:53 pm
[The following comment was submitted by David Luban. Luban’s book Torture, Power, and Law (Cambridge University Press, 2014) contains a critique of Allhoff’s book. …
I’ve revised the completeness theorem thoroughly. (This was issue 38.) The main change is that instead of constructing a maximally consistent set, we construct a complete and consistent set. Of course, those are extensionally the same; but both the reason why we need them and the way we construct them are directly related to completeness and only indirectly to maximal consistency: We want a set that contains exactly one of A or ¬A for every sentence, so we can define a term model and prove the truth lemma. …
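In symbols (my notation, not quoted from the revision), the construction seeks a set \(\Gamma^*\) that is complete and consistent in the sense that, for every sentence \(A\),

```latex
A \in \Gamma^* \ \text{or}\ \lnot A \in \Gamma^*, \qquad \text{but not both},
```

which is exactly the property needed to define the term model and prove the truth lemma, i.e. that \(\mathfrak{M}(\Gamma^*) \models A\) iff \(A \in \Gamma^*\).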
Schupbach and Sprenger (2011) introduce a novel probabilistic approach to measuring the explanatory power that a given explanans exerts over a corresponding explanandum. Though we are sympathetic to their general approach, we argue that it does not (without revision) adequately capture the way in which the causal explanatory power that c exerts on e varies with background knowledge. We then amend their approach so that it does capture this variance. Though our account of explanatory power is less ambitious than Schupbach and Sprenger’s in the sense that it is limited to causal explanatory power, it is also more ambitious because we do not limit its domain to cases where c genuinely explains e. Instead, we claim that c causally explains e if and only if our account says that c explains e with some positive amount of causal explanatory power.
In this chapter, I will discuss what it takes for a dynamical collapse theory to provide a reasonable description of the actual world. I will start with discussions of what is required, in general, of the ontology of a physical theory, and then apply it to the quantum case. One issue of interest is whether a collapse theory can be a quantum state monist theory, adding nothing to the quantum state and changing only its dynamics. Although this was one of the motivations for advancing such theories, its viability has been questioned, and it has been argued that, in order to provide an account of the world, a collapse theory must supplement the quantum state with additional ontology, making such theories more like hidden-variables theories than would first appear. I will make a case for quantum state monism as an adequate ontology, and, indeed, the only sensible ontology for collapse theories. This will involve taking dynamical variables to possess not sharp values, as in classical physics, but distributions of values.
I discuss a game-theoretic model in which scientists compete to finish the intermediate stages of some research project. Banerjee et al. (2014) have previously shown that if the credit awarded for intermediate results is proportional to their difficulty, then the strategy profile in which scientists share each intermediate stage as soon as they complete it is a Nash equilibrium. I show that the equilibrium is both unique and strict. Thus rational credit-maximizing scientists have an incentive to share their intermediate results, as long as this is sufficiently rewarded.
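The equilibrium claim can be illustrated with a toy two-player, two-strategy game (the payoff numbers and the reduction to a single share/withhold choice are illustrative assumptions of mine, not Banerjee et al.'s actual model). With credit proportional to difficulty and awarded for results shared first, a brute-force check confirms that mutual sharing is the unique strict Nash equilibrium under these payoffs:

```python
# Toy illustration: each scientist chooses to Share or Withhold an
# intermediate result. payoff[(a, b)] gives (credit to scientist 1,
# credit to scientist 2) for strategy profile (a, b). The numbers are
# made up, chosen so that sharing first is rewarded.
payoff = {
    ("share", "share"): (3, 3),
    ("share", "withhold"): (3, 1),
    ("withhold", "share"): (1, 3),
    ("withhold", "withhold"): (2, 2),
}

def is_strict_nash(profile):
    """A profile is a strict Nash equilibrium if every unilateral
    deviation makes the deviating player strictly worse off."""
    a, b = profile
    for alt in ("share", "withhold"):
        if alt != a and payoff[(alt, b)][0] >= payoff[(a, b)][0]:
            return False  # player 1 does at least as well by deviating
        if alt != b and payoff[(a, alt)][1] >= payoff[(a, b)][1]:
            return False  # player 2 does at least as well by deviating
    return True

strict_equilibria = [p for p in payoff if is_strict_nash(p)]
print(strict_equilibria)  # only ("share", "share") under these toy payoffs
```

The exhaustive check over all four profiles mirrors, in miniature, the paper's uniqueness-and-strictness claim: every deviation from mutual sharing is strictly punished, while every other profile admits a profitable deviation.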
Persistence judgments are ordinary judgments about whether an object survives a change, or perishes. For instance, if a house fire only superficially damages the kitchen, people judge that the house survived. But if the fire burnt the house to the ground instead, people judge that the house did not survive but was instead destroyed. We are interested in what drives these judgments, in part because objects are so central to our conception of the world, and our persistence judgments get to the very heart of the folk notion of an object.
In models for paraconsistent logics, the semantic values of sentences and their negations are less tightly connected than in classical logic. In “American Plan” logics for negation, truth and falsity are, to some degree, independent. The truth of ∼p is given by the falsity of p, and the falsity of ∼p is given by the truth of p. Since truth and falsity are only loosely connected, p and ∼p can both hold, or both fail to hold. In “Australian Plan” logics for negation, negation is treated rather like a modal operator, where the truth of ∼p in a situation amounts to p failing in certain other situations. Since those situations can be different from this one, p and ∼p might both hold here, or might both fail here.
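One standard way to make the “American Plan” precise is the four-valued semantics of first-degree entailment, where a sentence’s value is the set of classical values it receives: {1}, {0}, {0, 1} (“both”), or the empty set (“neither”). The encoding below is a sketch of mine, not drawn from the paper:

```python
def neg(v):
    """American Plan negation: the truth of ~p is given by the falsity
    of p, and the falsity of ~p is given by the truth of p."""
    out = set()
    if 0 in v:        # p is false, so ~p is true
        out.add(1)
    if 1 in v:        # p is true, so ~p is false
        out.add(0)
    return out

both, neither = {0, 1}, set()
assert neg({1}) == {0} and neg({0}) == {1}   # classical behaviour
assert neg(both) == both                     # p and ~p both hold
assert neg(neither) == neither               # p and ~p both fail
```

On the “Australian Plan”, by contrast, ~p would instead be evaluated at a star-mate x* of each situation x (truth of ~p at x amounts to p failing at x*), rather than by swapping truth and falsity at x itself.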
The main goal of this article is to defend non-metacognitive interpretations of both the question-asking and question-answering behavior of young children. Rather than manifesting awareness of their own states of knowledge or ignorance (as many in the field assume), such behavior is best seen as dependent upon a set of first-order (non-metacognitive) questioning attitudes, such as curiosity. In addition, the role of such attitudes in other aspects of development is briefly considered.
This essay focuses on personal love, or the love of particular persons as such. Part of the philosophical task in understanding personal love is to distinguish the various kinds of personal love. For example, the way in which I love my wife is seemingly very different from the way I love my mother, my child, and my friend. This task has typically proceeded hand-in-hand with philosophical analyses of these kinds of personal love, analyses that in part respond to various puzzles about love. Can love be justified? If so, how? What is the value of personal love? What impact does love have on the autonomy of both the lover and the beloved?
Why are cognitive disability and moral status thought to be sufficiently connected to warrant a separate entry? The reason is that individuals with cognitive disabilities have served as test cases in debates about the moral relevance of possessing such intellectual attributes as self-consciousness and practical rationality. If a significant portion of human beings lacks self-consciousness and practical rationality, then those attributes cannot by themselves distinguish the way we treat cognitively developed human beings from the way we treat non-human animals and human fetuses. If we cannot experiment on or kill human beings who lack those attributes, then the lack of those attributes alone cannot be what justifies animal experimentation or abortion.
Though many philosophers agree that stakes play a role in ordinary knowledge ascriptions, there is disagreement about what explains this. One view, epistemic contextualism, holds that “to know” is a context-sensitive verb and that the truth conditions for knowledge ascriptions can vary across conversational contexts (e.g., DeRose, 2009). For instance, Bob’s statement “I know the bank will be open tomorrow” can be true in low-stakes contexts and false in high-stakes contexts. Another view, interest-relative invariantism, denies that “to know” is a context-sensitive verb and that the truth conditions for knowledge ascriptions vary according to conversational contexts. Instead, cases like the Bank cases show that practical factors (i.e., stakes) play a distinctive role in determining whether the knowledge relation obtains (e.g., Stanley, 2005). Yet another alternative, which we’ll call classical invariantism, denies both that “to know” is a context-sensitive verb and that practical factors, such as stakes, play a direct role in determining whether the knowledge relation obtains. Instead, stakes affect knowledge ascriptions only by affecting our assessment of factors that have traditionally been taken to constitute or be necessary for knowledge, such as belief, quality of evidence, and so on (e.g., Bach, 2005; Weatherson, 2005; Ganson, 2007; Nagel, 2008). If this is right, then the role of stakes in knowledge ascriptions fails to motivate such surprising views as epistemic contextualism or interest-relative invariantism. Naturally, epistemic contextualists and interest-relative invariantists deny this, claiming that even when the factors that have traditionally been taken to constitute or be necessary for knowledge are held fixed, stakes continue to play a role in ordinary knowledge ascriptions (e.g., DeRose, 2009; Lawlor, 2013).
The authors argue in favor of the “nonconciliation” (or “steadfast”) position concerning the problem of peer disagreement. Throughout the paper they place heavy emphasis on matters of phenomenology—on how things seem epistemically with respect to the net import of one’s available evidence vis-à-vis the disputed claim p, and on how such phenomenology is affected by the awareness that an interlocutor whom one initially regards as an epistemic peer disagrees with oneself about p. Central to the argument is a nested goal/sub-goal hierarchy that the authors claim is inherent to the structure of epistemically responsible belief-formation: pursuing true beliefs by pursuing beliefs that are objectively likely given one’s total available evidence; pursuing this sub-goal by pursuing beliefs that are likely true (given that evidence) relative to one’s own deep epistemic sensibility; and pursuing this sub-sub-goal by forming beliefs in accordance with one’s own all-in, ultima facie, epistemic seemings.
Last week a team of 72 scientists released the preprint of an article attempting to address one aspect of the reproducibility crisis, the crisis of conscience in which scientists are increasingly skeptical about the rigor of our current methods of conducting scientific research. …
A core question of contemporary social morality concerns how we ought to handle racial categorization. By this we mean, for instance, classifying or thinking of a person as Black, Korean, Latino, White, etc.² While it is widely agreed that racial categorization played a crucial role in past racial oppression, there remains disagreement among philosophers and social theorists about the ideal role for racial categorization in future endeavors. At one extreme of this disagreement are short-term eliminativists who want to do away with racial categorization relatively quickly (e.g. Appiah, 1995; D’Souza, 1996; Muir, 1993; Wasserstrom, 2001/1980; Webster, 1992; Zack, 1993, 2002), typically because they view it as mistaken and oppressive. At the opposite end of the spectrum, long-term conservationists hold that racial identities and communities are beneficial, and that racial categorization, suitably reformed, is essential to fostering them (e.g. Outlaw, 1990, 1995, 1996). While extreme forms of conservationism have fewer proponents in academia than the most radical eliminativist positions, many theorists advocate more moderate positions. In between the two poles, there are many who believe that racial categorization is valuable (and perhaps necessary) given the continued existence of racial inequality and the lingering effects of past racism (e.g. Haslanger, 2000; Mills, 1998; Root, 2000; Shelby, 2002, 2005; Sundstrom, 2002; Taylor, 2004; Young, 1989). Such authors agree on the short-term need for racial categorization in at least some domains, but they often differ with regard to its long-term value.