-
I gave a talk on March 8 at an AI, Systems, and Society Conference at the Emory Center for Ethics. The organizer, Alex Tolbert (who had been a student at Virginia Tech), suggested I speak about controversies in statistics, especially P-hacking in statistical significance testing. …
-
To promise something, I need to communicate something to you. What is that thing that I need to communicate to you? To a first approximation, what I need to communicate to you is that I am promising. But that’s circular: it says that promising is communicating that I am promising. …
-
A neglected but challenging argument developed by Peter Geach, John Haldane, and Stephen Rothman purports to show that reproduction cannot be explained by natural selection and is irreducibly teleological. Meanwhile, the most plausible definitions of life include reproduction as a constitutive feature. The implication of combining these ideas is that life cannot be explained by natural selection and is irreducibly teleological. This does not entail that life cannot be explained in evolutionary terms of some kind, but it does lend support to the controversial view of Jerry Fodor and Thomas Nagel that evolutionists need to look beyond the constraints of Neo-Darwinism.
-
Where does the Born rule come from? We ask: “What is the simplest extension of probability theory where the Born rule appears?” This is answered by introducing “superposition events” in addition to the usual discrete events. Two-dimensional matrices (e.g., incidence matrices and density matrices) are needed to mathematically represent the differences between the two types of events. Then it is shown that the incidence and density matrices for superposition events are the (outer) products of a vector and its transpose, whose components foreshadow the “amplitudes” of quantum mechanics. The squares of the components of those “amplitude” vectors yield the probabilities of the outcomes. That is how probability amplitudes and the Born rule arise in the minimal extension of probability theory to include superposition events. This naturally extends to the full Born rule in the Hilbert spaces over the complex numbers of quantum mechanics. It would perhaps be satisfying if probability amplitudes and the Born rule arose only from deep results in quantum mechanics (e.g., Gleason’s Theorem). But both arise in a simple extension of probability theory to include “superposition events”, which should not be too surprising, since superposition is the key non-classical concept in quantum mechanics.
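A minimal two-outcome illustration of the construction described, in my own notation rather than the author's: take a superposition event over outcomes 1 and 2 with probabilities p_1 and p_2. Then
\[
\alpha = \begin{pmatrix} \sqrt{p_1} \\ \sqrt{p_2} \end{pmatrix}, \qquad
\rho = \alpha\,\alpha^{\mathsf T} = \begin{pmatrix} p_1 & \sqrt{p_1 p_2} \\ \sqrt{p_1 p_2} & p_2 \end{pmatrix}, \qquad
\Pr(i) = \alpha_i^{2} = \rho_{ii}, \quad p_1 + p_2 = 1,
\]
so the squares of the components of the "amplitude" vector (equivalently, the diagonal of the outer product) return the outcome probabilities, the Born-rule pattern in this minimal real-valued setting.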
-
In some games like Mafia, uttering falsehoods is a part of the game mechanic. These falsehoods are no more lies than falsehoods uttered by an actor in a performance are lies. Now consider a variant of poker where a player is permitted to utter falsehoods when and only when they have a Joker in hand. …
-
Edith Landmann-Kalischer (1877–1951) is the author of several
significant studies on topics in the philosophy of art, aesthetics,
value, mind, and knowledge in the first half of the twentieth century. Influenced by Franz Brentano, Georg Simmel, Carl Stumpf, and Stefan
George, she undertook her studies at a time when the academic, often
tendentious borders between psychology and philosophy, like those
between aesthetics and art history, were still being drawn. While
clearly also influenced by Edmund Husserl, she takes his phenomenology
to task for its idealism and, in her view, its unfounded isolation
from the sciences, especially psychology.
-
I have for a long time inclined towards ifthenism in mathematics: the idea that mathematics discovers truths of the form “If these axioms are true, then this thing is true as well.”
Two things have weakened my inclination to ifthenism. …
-
I've been exploring in this newsletter recently how people's growing inability to understand and control the institutions that shape their lives affects their political views (see here or here for instance). …
-
The critic can seem an impotent spectator, tossing in smiles and frowns from his seat while in the arena the Artist strains and strides, creating something new. But some critics, with their pen and their patronage, succeed in steering art’s development, making and breaking careers along the way. …
-
Brian Leftow’s 2022 book, Anselm’s Argument: Divine Necessity is an impressively thorough discussion of Anselmian modal metaphysics, centred around what he takes to be Anselm’s strongest “argument from perfection” (Leftow’s preferred term for an Ontological Argument). This is not the famous argument from Proslogion 2, nor even the modal argument that some have claimed to find in Proslogion 3, but rather, an argument from Anselm’s Reply to Gaunilo, expressed in the following quotation: “If … something than which no greater can be thought … existed, neither actually nor in the mind could it not exist. Otherwise it would not be something than which no greater can be thought. But whatever can be thought to exist and does not exist, if it existed, would be able actually or in the mind not to exist. For this reason, if it can be thought, it cannot not exist.” (p. 66) Before turning to this argument, Leftow offers an extended and closely-argued case for understanding Anselm’s modality in terms of absolute necessity and possibility, with a metaphysical foundation on powers as argued for at length (575 pages) in his 2012 book God and Necessity. After presenting this interpretation in Chapter 1, Leftow’s second chapter discusses various theological applications (such as the fixity of the past, God’s veracity, and immortality), addressing them in a way that both expounds and defends what he takes to be Anselm’s approach. Then in Chapter 3 Leftow addresses certain problems, for both his philosophical and interpretative claims, while Chapter 4 spells out the key Anselmian argument, together with Leftow’s suggested improvements. Chapter 5 explains how the argument depends on Brouwer’s system of modal logic, and defends this while also endorsing the more standard and comprehensive system S5.
-
In a recent TLS, I wrote about the spoils of pessimism—whether we should be quietists, retreating from the world, or activists who fight for it—but my real subject was despair. I did not get to write about the best book on despair I’ve read: Christian Wiman’s prose-poetic Zero at the Bone. …
-
One way of defining life is via a real definition, which gives the essence of life. Another approach is an operational definition, which shows how living things can be tested or measured in a way that is distinctive of the biological. Although I give a real definition elsewhere, in this paper I provide an operational definition, echoing Canguilhem’s dictum that life is what is capable of making mistakes. Biological mistakes are central to the behaviour of organisms, their parts and sub-systems, and the collections to which they belong. I provide an informal definition of a biological mistake and contrast mistakes with mere failures and malfunctions: though closely related, these phenomena are distinct. After giving some brief examples of mistake-making and how it can be tested, I reply to some objections to the very idea of a biological mistake.
-
At the start of the pandemic, Peter Singer and I argued that our top priority should be to learn more, fast. I feel similarly about AI, today. I’m far from an expert on the topic, so the main things I want to do in this post are to (i) share some resources that I’ve found helpful as a novice starting to learn more about the topic over the past couple months, and (ii) invite others to do likewise! …
-
Let us say that a being is omnisubjective if it has a perfect first-person grasp of all subjective states (including belief states). The question of whether God is omnisubjective raises a nest of thorny issues in the philosophy of language, philosophy of mind, and metaphysics, at least if there are irreducibly subjective states. There are notorious difficulties analyzing the core traditional divine attributes—omniscience, omnipotence, and omnibenevolence—but those difficulties are notorious partly because we seem to have a decent pre-theoretic grasp of what it means for something to be all-knowing, all-powerful, and all-good, and so it is surprising, frustrating, and perplexing that it is so difficult to provide a satisfactory analysis of those notions.
-
What is it for an argument to be successful? Some take success to be mind-independent, holding that successful arguments are those that meet some objective criterion such as soundness. Others take success to be dialectical, holding that successful arguments are those that would convince anyone meeting certain (perhaps idealized) conditions, or perhaps some targeted audience meeting those conditions. I defend a set of desiderata for theories of success, and argue that no objective or dialectical account meets those desiderata. Instead, I argue, success is individualistic: arguments can only (plausibly) be evaluated as successes (qua argument) relative to individuals. In particular, I defend The Knowledge Account, according to which an argument A is successful for individual i iff i knows A is sound and non-fallacious. This conception of success is a significant departure from orthodoxy and has interesting and unexplored philosophical and methodological implications for the evaluation of arguments.
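Put schematically, in my notation rather than the author's, the stated account reads
\[
\mathrm{Successful}(A, i) \;\iff\; K_i\big(\mathrm{Sound}(A) \wedge \neg\,\mathrm{Fallacious}(A)\big),
\]
where K_i abbreviates "individual i knows that".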
-
The theoretical developments that led to supersymmetry – first global and then local – over a period of about six years (1970/71-1976) emerged from a confluence of physical insights and mathematical methods drawn from diverse, and sometimes independent, research directions. Despite these varied origins, a common thread united them all: the pursuit of a unity in physics, grounded in the central role of symmetry, where “symmetry” is understood in terms of group theory, representation theory, algebra, and differential geometry.
-
Scientific fields frequently need to exchange data to advance their own inquiries. Data unification is the process of stabilizing these forms of interfield data exchange. I present an account of the epistemic structure of data unification, drawing on case studies from model-based cognitive neuroscience (MBCN). MBCN is distinctive because it shows that modeling practices play an essential role in mediating these data exchanges. Models often serve as interfield evidential integrators, and models built for this purpose have their own representational and inferential functions. This form of data unification should be seen as autonomous from other forms, particularly explanatory unification.
-
According to classical utilitarianism, well-being consists in pleasure or happiness, the good consists in the sum of well-being, and moral rightness consists in maximizing the good. Leibniz was perhaps the first to formulate this doctrine. Bentham made it widely known. For a long time, however, the second, summing part lacked any clear foundation. John Stuart Mill, Henry Sidgwick, and Richard Hare all gave arguments for utilitarianism, but they took this summing part for granted. It was John Harsanyi who finally presented compelling arguments for this controversial part of the utilitarian doctrine.
-
Scientists do not merely choose to accept fully formed theories; they also have to decide which models to work on before they are fully developed and tested. Since decisive empirical evidence in favour of a model will not yet have been gathered, other criteria must play determining roles. I examine the case of modern high-energy physics, where the experimental context that once favoured the pursuit of beautiful, simple, and general theories now favours the pursuit of models that are ad hoc, narrow in scope, and complex; in short, ugly models. The lack of new discoveries since the Higgs boson, together with the unlikeliness of a new higher-energy collider, has left searches for new physics conceptually and empirically wide open. Physicists must make use of the experiment at hand while also creatively exploring as-yet-unexplored alternatives. This encourages the pursuit of models that have at least one of two key features: (i) they take radically novel approaches, or (ii) they are easily testable. I present three models, neutralino dark matter, the relaxion, and repulsive gravity, and show that even if they do not exhibit traditional epistemic virtues, they are nonetheless pursuitworthy. I argue that experimental context strongly determines pursuitworthiness, and I lay out the conditions under which experiment encourages the pursuit of ugly models.
-
[Editor’s Note: The following new entry by Juliana
Bidadanure and David Axelsen replaces the
former entry
on this topic by the previous author.]
Egalitarianism is a school of thought in contemporary political
philosophy that treats equality as the chief value of a just political
system. Simply put, egalitarians argue for equality. They
have a presumption in favor of social arrangements that advance
equality, and they treat deviations from equality as prima
facie suspect. They recommend a far greater degree of equality
than we currently have, and they do so for distinctly egalitarian
reasons.
-
Probabilities play an essential role in the prediction and explanation of events and thus feature prominently in well-confirmed scientific theories. However, such probabilities are frequently described as subjective, epistemic, or both. This prompts a well-known puzzle: how could scientific posits that predict and explain human-independent events essentially involve agents or knowers? I argue that the puzzle can be resolved by acknowledging that although such probabilities are non-fundamental, they may still be ontic and objective. To this end I describe dynamical mechanisms that are responsible for the convergence of probability distributions for chaotic systems, and apply an account of emergence developed elsewhere. I suggest that this analysis will generalise and claim that, consequently, a great many of the probabilities in science should be characterised in the same terms. Along the way I’ll defend a particular definition of chaos that suits the emergence analysis.
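A toy numerical illustration of the kind of convergence at issue, not drawn from the paper itself: two ensembles started from very different initial distributions, pushed through a chaotic map, end up with nearly the same distribution.

import numpy as np

# Sketch (illustrative only; the paper's own mechanisms and examples may differ):
# iterate the chaotic logistic map x -> 4x(1-x) on two ensembles drawn from very
# different initial distributions, then compare their histograms. After enough
# iterations both approximate the same invariant density, illustrating how
# chaotic dynamics washes out differences in the initial probability assignment.
rng = np.random.default_rng(0)
ensembles = {
    "broad start": rng.uniform(0.0, 1.0, 100_000),
    "narrow start": rng.uniform(0.49, 0.51, 100_000),
}
for _ in range(50):                       # 50 iterations of the map
    for name, x in ensembles.items():
        ensembles[name] = 4.0 * x * (1.0 - x)
bins = np.linspace(0.0, 1.0, 21)
h_broad = np.histogram(ensembles["broad start"], bins=bins, density=True)[0]
h_narrow = np.histogram(ensembles["narrow start"], bins=bins, density=True)[0]
print("max difference between the two estimated densities:",
      np.max(np.abs(h_broad - h_narrow)))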
-
Suppose we observe many emeralds which are all green. This observation usually provides good evidence that all emeralds are green. However, the emeralds we have observed are also all grue, which means that they are either green and already observed or blue and not yet observed. We usually do not think that our observation provides good evidence that all emeralds are grue. Why? I argue that if we are in the best case for inductive reasoning, we have reason to assign low probability to the hypothesis that all emeralds are grue before seeing any evidence. My argument appeals to random sampling and the observation-independence of green, understood as probabilistic independence of whether emeralds are green and when they are observed.
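One way to make the observation-independence point explicit, in my formalization rather than necessarily the author's: for emerald i, let G_i say that i is green and O_i that i has already been observed. Then
\[
\mathrm{Grue}_i := (G_i \wedge O_i) \vee (\neg G_i \wedge \neg O_i), \qquad
P(\mathrm{Grue}_i) = P(G_i)\,P(O_i) + P(\neg G_i)\,P(\neg O_i)
\]
when colour is probabilistically independent of when an emerald is observed. The hypothesis that all emeralds are grue excludes every green-but-unobserved emerald, and independence gives each such case the positive probability P(G_i)P(\neg O_i); so, roughly, with many emeralds still unobserved and greenness not antecedently improbable, the grue hypothesis starts out with low prior probability.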
-
While there has been much discussion of whether AI systems could function as moral agents or acquire sentience, there has been very little discussion of whether AI systems could have free will. I sketch a framework for thinking about this question, inspired by Daniel Dennett’s work. I argue that, to determine whether an AI system has free will, we should not look for some mysterious property, expect its underlying algorithms to be indeterministic, or ask whether the system is unpredictable. Rather, we should simply ask whether we have good explanatory reasons to view the system as an intentional agent, with the capacity for choice between alternative possibilities and control over the resulting actions. If the answer is “yes”, then the system counts as having free will in a pragmatic and diagnostically useful sense.
-
The main aim is to construct a system of Nmatrices (non-deterministic matrices) by replacing standard sets with quasets. Since QST (quaset theory) is a conservative extension of ZFA (Zermelo-Fraenkel set theory with atoms), it is possible to obtain generalized Nmatrices (Q-Nmatrices). Since the original formulation of QST is not completely adequate for the developments we advance here, some possible amendments to the theory are also considered. One of the most interesting traits of such an extension is the existence of complementary quasets, which admit elements with undetermined membership. Such elements can be interpreted as quantum systems in superposed states. We also present a relationship between QST and the theory of rough sets (RST), which grants the existence of models for QST formed by rough sets. Some consequences of the given formalism for the relation of logical consequence are also analysed.
-
Organ sale – for example, allowing or encouraging consenting
adults to become living kidney donors in return for money – has
been proposed as a possible solution to the seemingly chronic shortage
of organs for transplantation. Many people however regard this idea as
abhorrent and argue both that the practice would be unethical and that
it should be banned. This entry outlines some of the different
possible kinds of organ sale, briefly states the case in favour, and
then examines the main arguments against.
-
This paper concerns the question of which collections of general relativistic spacetimes are deterministic relative to which definitions. We begin by considering a series of three definitions of increasing strength due to Belot (1995). The strongest of these definitions is particularly interesting for spacetime theories because it involves an asymmetry condition called “rigidity” that has been studied previously in a different context (Geroch 1969; Halvorson and Manchak 2022; Dewar 2024). We go on to explore other (stronger) asymmetry conditions that give rise to other (stronger) forms of determinism. We introduce a number of definitions of this type and clarify the relationships between them and the three considered by Belot. We go on to show that there are collections of general relativistic spacetimes that satisfy much stronger forms of determinism than previously known. We also highlight a number of open questions.
-
Determinism is the thesis that the past determines the future, but efforts to define it precisely have exposed deep methodological disagreements. Standard possible-worlds formulations of determinism presuppose an "agreement" relation between worlds, but this relation can be understood in multiple ways, none of which is particularly clear. We critically examine the proliferation of definitions of determinism in the recent literature, arguing that these definitions fail to deliver clear verdicts about actual scientific theories. We advocate a return to a formal approach, in the logical tradition of Carnap, that treats determinism as a property of scientific theories, rather than an elusive metaphysical doctrine. We highlight two key distinctions: (1) the difference between qualitative and "full" determinism, as emphasized in recent discussions of physics and metaphysics, and (2) the distinction between weak and strong formal conditions on the uniqueness of world extensions. We argue that defining determinism in terms of metaphysical notions such as haecceities is unhelpful, whereas rigorous formal criteria such as Belot's D1 and D3 offer a tractable and scientifically relevant account. By clarifying what it means for a theory to be deterministic, we set the stage for a fruitful interaction between physics and metaphysics.
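For orientation, the possible-worlds schema at issue can be written roughly as follows (my schematic paraphrase, not Belot's D1 or D3):
\[
T \text{ is deterministic} \;\iff\; \text{for all worlds } w, w' \text{ satisfying } T:\;
w \sim_{\mathrm{past}} w' \;\Rightarrow\; w \sim w',
\]
where \sim_{\mathrm{past}} is agreement on the past (or on some initial segment) and \sim is agreement overall; this "agreement" relation is precisely what the authors note can be understood in multiple, not especially clear, ways.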
-
The idea that the universe is governed by laws of nature has precursors in ancient times, but the view that it is a primary, or even the primary, aim of science to discover these laws only became established during the 16th and 17th centuries, when it replaced the then-prevalent Aristotelian conception of science. The most prominent promoters and developers of the new view were Galileo, Descartes, and Newton. Descartes, in Le Monde, dreamed of an elegant mathematical theory that specified laws describing the motions of matter, and Newton, in his Principia, went a long way towards realizing this dream.
-
This paper considers the mundane ways in which AI is being incorporated into scientific practice today, and particularly the extent to which AI is used to automate tasks perceived as boring, “mere routine”, and inconvenient to researchers. We label such uses instances of “Convenience AI”: situations where AI is applied with the primary intention of increasing speed and minimizing human effort. We outline how attributions of convenience to AI applications involve three key characteristics: (i) an emphasis on speed and ease of action, (ii) a comparative element, and (iii) a subject-dependent and subjective quality. Using examples from medical science and development economics, we highlight epistemic benefits, complications, and drawbacks of Convenience AI along these three dimensions. While the pursuit of convenience through AI can save precious time and resources and give rise to novel forms of inquiry, our analysis underscores how the uncritical adoption of Convenience AI to shortcut human labour may also weaken the evidential foundations of science and generate inertia in how research is planned, set up, and conducted, with potentially damaging implications for the knowledge being produced. Critically, we argue that the consistent association of Convenience AI with the goals of productivity, efficiency, and ease, as often promoted by companies targeting the research market for AI applications, can lower critical scrutiny of research processes and shift focus away from appreciating their broader epistemic and social implications.
-
People are often surprisingly hostile to the very idea of moral optimizing, presumably because it’s more gratifying to simply act on vibes and emotional appeal (or they don’t want to be on the hook for moral verdicts that go against their personal interests). …