Let’s suppose someone says that Gödel is the man who proved the incompleteness of arithmetic. . . . In the case of Gödel that’s practically the only thing many people have heard about him—that he discovered the incompleteness of arithmetic. Does it follow that [for such people] whoever discovered the incompleteness of arithmetic is the referent of ‘Gödel’? . . . Suppose that Gödel was not in fact the author of this theorem. A man named ‘Schmidt’, whose body was found in Vienna under mysterious circumstances many years ago, actually did the work in question. His friend Gödel somehow got hold of the manuscript and it was thereafter attributed to Gödel. On the view in question, then, . . . since the man who discovered the incompleteness of arithmetic is in fact Schmidt, we [who have heard nothing else about Gödel], when we talk about ‘Gödel’, are in fact always referring to Schmidt. But it seems to me that we are not. We simply are not. (Kripke, 1980, pp. 83–4) The judgement Kripke reports here is often regarded as a paradigmatic case of an appeal to ‘philosophical intuition’, and such appeals have been the subject of much recent debate. This particular one attracted the attention, some years ago, of Machery, Mallon, Nichols, and Stich (MMN&S), who were then at the leading edge of the emerging ‘experimental philosophy’ movement. Their paper “Semantics, Cross-Cultural Style” reported the results of experiments that showed, or so they claimed, that such intuitions vary cross-culturally. In particular, although ‘Westerners’ do tend to agree with Kripke, ‘East Asians’ tend to disagree.
Maybe the most important argument in David Chalmers’s monumental book Constructing the World (Chalmers, 2012) is the one he calls the ‘Frontloading Argument’, which is used in Chapter 4 to argue for the book’s central thesis, A Priori Scrutability. And, prima facie, the Frontloading Argument looks very strong. I shall be arguing here, however, that it is incapable of securing the conclusion it is meant to establish. My interest is not in the conclusion for which Chalmers is arguing. As it happens, I am skeptical about A Priori Scrutability. Indeed, my views about the a priori are closer to Quine’s than to Chalmers’s. But my goal here is not to argue for any substantive conclusion but just for a dialectical one: Despite its initial appeal, the Frontloading Argument fails as an argument for A Priori Scrutability.
Ladyman and Ross (LR) argue that quantum objects are not individuals (or are at most weakly discernible individuals) and use this idea to ground their metaphysical view, ontic structural realism, according to which relational structures are prior to things. LR acknowledge that there is a version of quantum theory, namely the Bohm theory (BT), according to which particles do have definite trajectories at all times. However, LR interpret the research by Brown et al. as implying that “raw stuff” or haecceities are needed for the individuality of particles of BT, and LR dismiss this as idle metaphysics. In this paper we note that Brown et al.’s research does not imply that haecceities are needed. Thus BT remains a genuine option for those who seek to understand quantum particles as individuals. However, we go on to discuss some problems with BT which led Bohm and Hiley to modify it.
This paper explores the theme “quantum approaches to consciousness” by considering the work of one of the pioneers in the field. The physicist David Bohm (1917–1992) not only made important contributions to quantum physics, but also had a long-term interest in interpreting the results of quantum physics and relativity in order to develop a general world view. He further proposed that living and mental processes could be understood in a new, scientifically and philosophically more coherent way in the context of such a new world view. This paper gives a brief overview of different – and sometimes contradictory – aspects of Bohm’s research programme, and evaluates how they can be used to give an account of topics of interest in contemporary consciousness studies, such as analogies between thought and quantum processes, the problem of mental causation, the mind-body problem and the problem of time consciousness.
Some recent accounts of constitutive relevance have identified mechanism components with entities that are causal intermediaries between the input and output of a mechanism. I argue that on such accounts there is no distinctive inter-level form of mechanistic explanation and that this highlights an absence in the literature of a compelling argument that there are such explanations. Nevertheless, the entities that these accounts call ‘components’ do play an explanatory role. Studying causal intermediaries linking variables X and Y provides knowledge of the counterfactual conditions under which X will continue to bring about Y. This explanatory role does not depend on whether intermediate variables count as components. The question of whether there are distinctively mechanistic explanations remains open.
Information theory presupposes the notion of an epistemic agent, such as a scientist or an idealized human. Despite that, information theory is increasingly invoked by physicists concerned with fundamental physics, physics at very high energies, or generally with the physics of situations in which even idealized epistemic agents cannot exist. In this paper, I shall try to determine the extent to which the application of information theory in those contexts is legitimate. I will illustrate my considerations using the case of black hole thermodynamics and Bekenstein’s celebrated argument for his formula for the entropy of black holes. This example is particularly pertinent to the theme of the present collection because it is widely accepted as ‘empirical data’ in notoriously empirically deprived quantum gravity, even though the laws of black hole thermodynamics have so far evaded direct empirical confirmation.
This paper considers the importance of unification in the context of developing scientific theories. I argue that unifying hypotheses are not valuable simply because they are supported by multiple lines of evidence. Instead, they can be valuable because they guide experimental research in different domains in such a way that the results from those experiments inform the scope of the theory being developed. I support this characterization by appealing to the early development of quantum theory. I then draw some comparisons with discussions of robustness reasoning.
Many religions offer hope for a life that transcends death and believers find great comfort in this. Non-believers typically do not have such hopes. In the face of death, they may find consolation in feeling contented with the life they have lived. But do they have hopes? I will identify a range of distinctly secular hopes at the end of life. Nothing stops religious people from sharing these secular hopes, in addition to their hope for eternal life. I will distinguish between (a) hopes about one’s life, (b) hopes about one’s death, (c) hopes about attitudes of others, and (d) hopes about the future. But before turning to these hopes, I will reflect on the following question: What is it that would keep a person from hoping for eternal life?
Forthcoming in Philosophical Studies.
Forthcoming in The Review of Philosophy and Psychology.
We hear the term bandied about all the time. A man cheats on his wife. We are told that this is simply part of his ‘nature’: that men have evolved to be philanderers. Two young men fight on the streets, taunting and goading each other on. …
This paper presents an account of the semantic content and conventional discourse effects of a range of sentence types in English, namely falling declaratives, polar interrogatives, and certain kinds of rising declaratives and tag interrogatives. The account aims to divide the labor between compositional semantics and conventions of use in a principled way. We argue that falling declaratives and polar interrogatives are unmarked sentence types. On our account, differences in their conventional discourse effects follow from independently motivated semantic differences combined with a single convention of use, which applies uniformly to both sentence types. As a result, the Fregean ‘illocutionary force operators’ Assertion and Question become unnecessary. In contrast, we argue that rising declaratives and tag interrogatives are marked sentence types. On our account, their conventional discourse effects consist of the effects that are dictated by the basic convention of use that is common to all sentence types considered here, augmented with special effects that are systematically connected to their formal properties. Thus, a central feature of our approach is that it maintains a parallelism between unmarked and marked sentence types on the one hand, and basic and complex discourse effects on the other.
Received: 10 February 2017 / Accepted: 26 June 2017 / Published online: 3 August 2017. © The Author(s) 2017. This article is an open access publication.

Abstract: This paper develops a fourth model of public engagement with science, grounded in the principle of nurturing scientific agency through participatory bioethics. It argues that social media is an effective device through which to enable such engagement, as it has the capacity to empower users and transform audiences into co-producers of knowledge, rather than consumers of content. Social media also fosters greater engagement with the political and legal implications of science, thus promoting the value of scientific citizenship. This argument is explored by considering the case of nanoscience and nanotechnology, as an exemplar for how emerging technologies may be handled by the scientific community and science policymakers.
In an exchange with Axel Honneth and in other writings in the late 1990s, Nancy Fraser argued against privileging recognition in social and political philosophy without a concomitant consideration of the requirement for redistribution. Thus she argued for coupling the recognition of identities—racial, gender, cultural, etc.—with attention to the need for economic redistribution. In reply, Axel Honneth suggested instead that recognition itself is at the root of the theory of justice. However divergent their approaches, both theorists discussed this issue in the context of a nation-state or political society, leaving open the question of the applicability of these notions in a more global perspective. And although Fraser has recently turned to consider norms for this transnational domain, the question remains not only how to conceive the general interrelation of these two concepts of recognition and redistribution but also more specifically which sorts of differences should be recognized as playing a significant role within redistributive principles themselves or in their practical application. This problem becomes acute in the context of global justice and transnational recognition, where a multitude of differences comes into play: not only between the global south and north, but also in terms of culture, nationality, and gender, among others.
For more than twenty-five years, Fine has been challenging the traditional interpretation of the violations of Bell inequalities (BI) by experiment. A natural interpretation of Fine’s theorem is that it provides us with an alternative set of assumptions on which to put the blame for the failure of the BI, and a new interpretation of the violation of the BI by experiment should follow. This is not, however, how Fine interprets his theorem. Indeed, Fine claims that his result undermines other interpretations, including the traditional interpretation in terms of local realism. The aim of this paper is to understand and to assess Fine’s claims. We distinguish three different strategies that Fine uses in order to support his interpretation of his result. We show that none of these strategies is successful. Fine fails to prove that local realism is not at stake in the violation of the BI by quantum phenomena.
Let an argument be modally valid just in case, necessarily, if its premises are true, then its conclusion is true. Propositions begins with the assumption that some arguments are modally valid. Chapter 1—‘Propositions and Modal Validity’—argues that the premises and conclusions of modally valid arguments exist necessarily, have their truth conditions essentially, and are the fundamental bearers of truth and falsity. Again, some arguments are modally valid. So there are the premises and conclusions of modally valid arguments. So there are necessarily existing fundamental bearers of truth and falsity that have their truth conditions essentially. I shall call these entities ‘propositions’. So there are propositions.
In Dasgupta (2013) I defended a relationalist view of mass. On this view mass is fundamentally relational, so that the state of a physical system vis-a-vis mass consists at bottom just in facts about mass-relationships, such as that one body is more massive than another. This is in contrast to the absolutist view that in addition to the mass-relations there are further facts about which “intrinsic” mass each body has. In my paper I discussed a number of virtues of
The counterfactual approach to defining actual causation has come a long way since Lewis started it off. However, there are still important open problems that need to be solved. One of them is the (in)transitivity of causation. Endorsing transitivity was a major source of trouble for the approach taken by Lewis, which is why currently most approaches reject it. But transitivity has never lost its appeal, and there is a large literature devoted to understanding why this is so. Starting from a survey of this work, we will develop a formal analysis of transitivity and the problems it poses for causation. This analysis provides us with a sufficient condition for causation to be transitive, a sufficient condition for dependence to be necessary for causation, and several characterisations of the transitivity of dependence. Finally, we show how this analysis leads naturally to several conditions a definition of causation should satisfy, and use those to suggest a new definition of causation.
There is a familiar philosophical position – sometimes called the doctrine of the open future – according to which future contingents (claims about underdetermined aspects of the future) systematically fail to be true. For instance: supposing that there are ways things could develop from here in which Trump is impeached, and in which he is not, it is not now true that Trump will be impeached, and not now true that Trump will not be impeached. For well over 2000 years, however, open futurists have been accused of denying certain logical laws – bivalence, excluded middle, or both – for entirely ad hoc reasons, most notably, that their denials are required for the preservation of something we hold dear. In a recent paper, however, I sought to argue that this deeply entrenched narrative ought to be overturned. My thought was this: given a popular, plausible approach to the semantics of future contingents, we can reduce the question of their status to the Russell/Strawson debate concerning presupposition failure, definite descriptions, and bivalence. In that case, we will see that open futurists in fact needn’t deny bivalence (Russell), or, if they do, they will do so for perfectly general (Strawsonian) reasons – reasons for which we all must deny bivalence. Of course, the metaphysical objections to the open futurist’s model of the future will remain just as they were. However, the millennia-old “semantic” or “logical” objections to the doctrine would be answered.
A new “voucher” program aims to shrink the US waiting list for kidney transplants (Veale, 2016). The waiting list is long, hovering in 2017 at around 95,000 (United Network for Organ Sharing, 2017). During 2016, approximately 19,000 kidney transplants took place, meeting only approximately one fifth of the demand. For patients with end stage renal disease (ESRD), transplantation has greater health benefits than dialysis, both in terms of length and quality of life (Tonelli et al., 2011). Transplantation from living donors is optimal: it tops both dialysis and transplantation from deceased donors in terms of health outcomes and cost-effectiveness (LaPointe Rudow et al., 2015, 914). The new voucher program involves live donation.
Carl Tollef Solberg and Espen Gamlund have recently suggested that in allocating scarce, life-saving resources we ought to consider how bad death would be for those who would die if left untreated (Solberg and Gamlund 2016, 8). We have moral reason, they intimate, to prioritize persons for whom death would be very bad over persons for whom it would be less bad (or not bad at all). In particular, we should in our allocation decisions consider how bad death would be for persons according to the “Time-Relative Interest Account,” developed by Jeff McMahan (Solberg and Gamlund 2016, 2).
Computer simulations of an epistemic landscape model, modified to include an explicit representation of a centralised funding body, show that the method of funding allocation has significant effects on the communal trade-off between exploration and exploitation, with consequences for the community’s ability to generate significant truths. The results show that this effect is contextual and depends on the size of the landscape being explored, with funding that includes explicit random allocation performing significantly better than peer review on large landscapes. The paper proposes a way of incorporating external institutional factors in formal social epistemology, and offers a way of bringing such investigations to bear on current research policy questions.
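The comparison of funding-allocation methods described in this abstract can be illustrated with a deliberately minimal toy simulation. Everything below (the landscape representation, the movement rule, the two allocation functions and their names) is an assumption made for illustration, not the paper’s actual model:

```python
import random

def run(landscape_size, rounds, n_agents, allocate, seed=0):
    """Toy epistemic-landscape run: return total significance of approaches visited."""
    rng = random.Random(seed)
    # Hidden landscape: the (unknown) significance of each research approach.
    landscape = [rng.random() for _ in range(landscape_size)]
    positions = [rng.randrange(landscape_size) for _ in range(n_agents)]
    discovered = set(positions)
    for _ in range(rounds):
        # The funding body funds half the community each round.
        funded = allocate(positions, landscape, rng)
        for i in funded:
            # Funded agents explore a neighbouring approach.
            positions[i] = (positions[i] + rng.choice([-1, 1])) % landscape_size
            discovered.add(positions[i])
    return float(sum(landscape[p] for p in discovered))

def peer_review(positions, landscape, rng):
    # Fund agents at currently best-regarded approaches (exploitation-biased);
    # landscape value stands in as a crude proxy for reviewer esteem.
    ranked = sorted(range(len(positions)),
                    key=lambda i: landscape[positions[i]], reverse=True)
    return ranked[: len(positions) // 2]

def random_allocation(positions, landscape, rng):
    # Fund a uniformly random half of the community (exploration-friendly).
    return rng.sample(range(len(positions)), len(positions) // 2)
```

Comparing `run(..., peer_review)` with `run(..., random_allocation)` across small and large values of `landscape_size` mimics, in caricature, the contextual effect the abstract reports.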
In this paper I investigate whether certain substructural theories are able to dodge paradox while at the same time containing what might be viewed as a naive validity predicate. To this end I introduce the requirement of internalization, roughly, that an adequate theory of validity should prove that its own metarules are validity-preserving. The main point of the paper is that substructural theories fail this requirement in various ways.
The author of this book is a professor of philosophy and of the classics; the book is a classicist literary history of sorts. Its novelty is in its author’s invitation to readers to argue with him on the Internet through an e-link that he provides. The book’s other novelty is its choice to view Plato more as a writer than as a philosopher—with a philosophical purpose in mind, of course. Until recently, discussions of the greatness of Plato as a philosopher eclipsed discussions of his artistic greatness as a writer. Thus, though his Symposium is a major literary masterpiece of almost unequalled loveliness, commentators on it discuss its philosophy, tending to ignore it as art. The book at hand discusses some works of Plato as literary masterpieces while discussing a famous historical problem, namely, the Socratic problem: what part of Plato’s output expresses the opinions of his teacher Socrates? Unfortunately, the book is apologetic, and so its value is more that of a pioneering work than of a serious contribution. Its apologetic aspect shows when it skirts the unpleasant fact that whereas Socrates was a staunch defender of democracy, Plato was an elitist who preferred meritocracy.
Normative non-naturalism is the view that normativity has its source in irreducible, non-natural matters of fact. Here I use ‘normativity’ broadly to include phenomena like rationality, reasons, oughts and shoulds, good and bad, right and wrong, etc. Thus, if we interpret G. E. Moore as proposing that the property of goodness is sui generis in the sense that it’s irreducible and isn’t identical to any natural property, he would count as a normative non-naturalist. And Scanlon (2014) recently defended a non-naturalist view on which the relation of being a reason for is sui generis in the same sense. Non-naturalism has also been defended recently by Oddie (2005), Parfit (2006, 2011), Wedgwood (2007), FitzPatrick (2008, 2014), and Enoch (2011).
Continuing with my Egon Pearson posts in honor of his birthday, I reblog a post by Aris Spanos: “Egon Pearson’s Neglected Contributions to Statistics”. Egon Pearson (11 August 1895 – 12 June 1980) is widely known today for his contribution to recasting Fisher’s significance testing into the Neyman-Pearson (1933) theory of hypothesis testing. …
In a number of posts over the past several years, I’ve explored various ways to make a countably infinite fair lottery machine (assuming causal finitism is false), typically using supertasks in some way. …
It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment (CASCADE) project run by John Paschkewitz. For a reminder, read these:
• Complex adaptive system design (part 1), Azimuth, 2 October 2016. …
The Holodeck - Star Trek
There is an apple in front of me. I can see it, but I can’t touch it. The reason is that the apple is actually a 3-D rendered model of an apple. It looks like an apple, but exists only within a virtual environment — one that is projected onto the computer screen in front of me. …
The topic of unity in the sciences can be explored through the following questions: Is there one privileged, most basic or fundamental concept or kind of thing, and if not, how are the different concepts or kinds of things in the universe related? Can the various natural sciences (e.g., physics, astronomy, chemistry, biology) be unified into a single overarching theory, and can theories within a single science (e.g., general relativity and quantum theory in physics, or models of evolution and development in biology) be unified? Are theories or models the relevant connected units? What other connected or connecting units are there?