-
How to explain the Aharonov-Bohm (AB) effect remains deeply controversial, particularly regarding the tension between locality and gauge invariance. Recently, Wallace argued that the AB effect can be explained in a local and gauge-invariant way by using the unitary gauge. In this paper, I present a critical analysis of Wallace’s intriguing argument. First, I show that the unitary gauge transforms the Schrödinger equation into the Madelung equations, which are expressed entirely in terms of local and gauge-invariant quantities. Next, I point out that an additional quantization condition must be imposed for the Madelung equations to be equivalent to the Schrödinger equation, and that this condition is inherently nonlocal. Finally, I argue that the Madelung equations with the quantization condition can hardly explain the AB effect, even in a nonlocal way. This analysis suggests that the unitary gauge does not resolve the tension between locality and gauge invariance in explaining the AB effect; rather, it highlights once more the profound conceptual challenges in reconciling the AB effect with a local and gauge-invariant framework.
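For reference, a standard textbook form of the equations at issue (a sketch in my notation, not the paper’s): writing the wavefunction in polar form, \(\psi = \sqrt{\rho}\, e^{iS/\hbar}\), the Schrödinger equation with minimal electromagnetic coupling splits into the Madelung (hydrodynamic) equations,

```latex
% Madelung decomposition: \psi = \sqrt{\rho}\, e^{iS/\hbar}
\begin{align}
  &\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0,
  \qquad \mathbf{v} = \frac{\nabla S - q\mathbf{A}}{m},\\[4pt]
  &\frac{\partial S}{\partial t} + \frac{(\nabla S - q\mathbf{A})^2}{2m}
    + q\phi \;-\; \frac{\hbar^2}{2m}\,\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}} \;=\; 0,
\end{align}
```

with \(\rho\) and \(\mathbf{v}\) local and gauge-invariant. The quantization condition referred to above is, in this notation, the single-valuedness requirement \(\oint_C (m\mathbf{v} + q\mathbf{A})\cdot d\boldsymbol{\ell} = nh\) for every closed curve \(C\); because it constrains the velocity field around entire loops at once, it is a global, nonlocal condition.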
-
The geometry of the universe is today widely believed to be flat based on combined data obtained during the 2000s. Prior to this, the geometry of the universe was essentially unknown. However, within the relevant literature one finds claims indicating a strong preference for a (nearly) closed universe, based on philosophical and other “non-experimental” reasons. The main aim of this article is to identify these reasons and assess the extent to which philosophical reasoning influenced the establishment of the dark matter hypothesis and the development of models for a closed universe. Building on groundwork laid by de Swart (2020), this study expands the discussion by (a) arguing that opinions on the geometry of the universe during the 1970s and 1980s were more divided than often assumed, (b) uncovering a lesser-known Machian argument for flat geometry proposed by Dennis Sciama, and (c) presenting a fine-tuning argument stemming from the ‘coincidence problem’ articulated by Robert Dicke. The study provides a nuanced perspective on how philosophical considerations contributed to shaping early views on cosmology and dark matter and highlights the significant role philosophical reasoning can play in guiding scientific inquiry in physics.
-
We introduce what we call the paradox of self-consultation: This is the question of how a priori inquirers, like philosophers, mathematicians, and linguists, are able to (successfully) investigate matters of which they are initially ignorant by systematically questioning themselves. A related phenomenon is multiple grades of access: We find it extremely hard to think up analyses of our concepts that do not suffer from counterexamples; moderately hard to think up counterexamples to proposed analyses; and trivial to verify that a provided counterexample is genuine. We consider a range of potential explanations, including two-system approaches, and show why they are unsatisfactory, despite being on the right track. We then proceed to give a naturalistic solution to the paradox and multiple grades of access. In doing so, we present a novel theory of epistemic work, which we connect to formal learning theory.
-
Several philosophers of science have taken inspiration from biological research on niches to conceptualise scientific practice. We systematise and extend three niche-based theories of scientific practice: conceptual ecology, cognitive niche construction, and scientific niche construction. We argue that research niches are a promising conceptual tool for understanding complex and dynamic research environments, which helps to investigate relevant forms of agency and material and social interdependencies, while also highlighting their historical and dynamic nature. To illustrate this, we develop a six-point framework for conceptualising research niches. Within this framework, research niches incorporate multiple and heterogeneous material, social, and conceptual factors (multi-dimensionality); research outputs arise, persist and differentiate through interactions between researchers and research niches (processes); researchers actively respond to and construct research niches (agency); research niches enable certain interactions and processes and not others (capability); and research niches are defined in relation to particular entities, such as individual researchers, disciplines, or concepts (relationality), and in relation to goals, such as understanding, solving problems, intervention, or the persistence of concepts or instruments (normativity).
-
This paper is about a problem which arose in mathematics but is now widely considered by mathematicians to be a matter “merely” for philosophy. I want to show what philosophy can contribute to solving the problem by returning it to mathematics, and I will do that by elucidating what it is to be a solution to a mathematical problem at all.
-
We can use a Mahatma Gandhi or a Mother Teresa as a moral exemplar to figure out what our virtues should be. But we cannot use an Usain Bolt or a Serena Williams as a physical exemplar to figure out what our physical capabilities should be. …
-
Preliminary Note: The following is very speculative! I’ve been writing occasionally on AI here, especially about how the advent of AI may change our conception of ourselves as agents (here, here, and here). …
-
Angelic visitations in our world are at best rare, and at worst they never occur at all. Not so in Neil Fisk’s world. There, angelic visitations are common – and often deadly. Neil lost his wife to such a visitation, and he’s hated God ever since. The problem with this hatred is that Neil is quite sure his wife is in heaven, as he saw her soul ascending and has never seen her walking around in hell during the frequent glimpses the living are given of the underworld. Since Neil thinks he cannot willingly become devout, he must rely on a divine glitch; those who are caught in heaven’s light during an angelic visitation involuntarily become devout, and thus go to heaven. Luckily for Neil, he drives into a beam of heaven’s light, loses his sight, and becomes devout. Unluckily for Neil, God sends him to hell anyway.
-
I consider applications of “AI extenders” to dementia care. AI extenders are AI-powered technologies that extend minds in ways interestingly different from old-school tech like notebooks, sketch pads, models, and microscopes. I focus on AI extenders as ambiance: so thoroughly embedded into things and spaces that they fade from view and become part of a subject’s taken-for-granted background. Using dementia care as a case study, I argue that ambient AI extenders are promising because they afford richer and more durable forms of multidimensional integration than do old-school extenders like Otto’s notebook. They can be tailored, in fine-grained ways along multiple timescales, to a user’s particular needs, values, and preferences—and crucially, they can do much of this self-optimizing on their own. I discuss why this is so, why it matters, and its potential impact on affect and agency. I conclude with some worries in need of further discussion.
-
The article summarizes the present state of research into the conceptual foundations of the periodic table. We give a brief historical account of the development of the periodic table and periodic system, including the impact of modern physics due to the discoveries of Moseley and Bohr and the advent of modern quantum mechanics. The role of the periodic table in the debate over the reduction of chemistry is discussed, including attempts to derive the Madelung rule from first principles. Other current debates concern the concept of an “element” and its dual role as a simple substance and an elementary substance, and the question of whether elements and groups of elements constitute natural kinds. The second of these issues bears on further debates concerning the placement of certain elements, such as H, He, La and Ac, in the periodic table.
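The Madelung rule mentioned above has a simple concrete statement: subshells \(nl\) fill in order of increasing \(n+l\), with ties broken by smaller \(n\). A minimal sketch (the function names are my own, for illustration):

```python
def madelung_order(max_n):
    """Subshells (n, l) sorted by the Madelung rule: increasing n + l, ties by n."""
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))

LETTERS = "spdfghi"  # conventional subshell letters for l = 0, 1, 2, ...

def label(n, l):
    return f"{n}{LETTERS[l]}"

order = [label(n, l) for n, l in madelung_order(4)]
print(order)  # ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

Note how the rule places 4s (with \(n+l=4\)) before 3d (with \(n+l=5\)), the well-known anomaly that first-principles derivations must account for.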
-
Discussions on the compositionality of inferential roles concentrate on extralogical vocabulary. However, there are nontrivial problems concerning the compositionality of sentences formed by the standard constants of propositional logic. For example, is the inferential role of A∧B uniquely determined by those of A and B? And how is it determined? This paper investigates such questions. We also show that these issues raise matters of more significance than may prima facie appear.
-
A recent interviewer asked Tyler Cowen to explain falling birth rates, and he puckishly responded, “Do you have kids?” His point: Anyone who knows what kids are actually like can instantly understand why adults are reluctant to have them. …
-
A few years ago, scientists feared they’d lose their jobs if they said anything against diversity programs. I was against that. Now scientists fear they’ll lose their jobs if they say anything for diversity programs. …
-
The advancement of and prospects for stem cell research raise a number of specific ethical issues. While navigating the ethical landscape of stem cell research is often challenging for biology researchers and biotechnology innovators, it is also difficult for the public and other persons of concern (from ethicists to policymakers) to grasp the technicalities of a burgeoning field that develops in many directions. Organoids are one of these new biotechnological constructs that are currently eliciting a rich debate in bioethics. In this guide, we argue that different types of organoids have different emerging properties with different ethical implications. Going from general properties to particular ones, we propose a typology of organoid technology and other associated biotechnology from a philosophical and ethical perspective. We point to relevant ethical issues and try to convey the sense of uncertainty peculiar to ongoing research and emerging technological objects.
-
Spacetime singularities are expected to disappear in quantum gravity. Singularity resolution prima facie supports the view that spacetime singularities are mathematical pathologies of general relativity. However, this conclusion might be premature. Spacetime singularities are more accurately understood as global properties of spacetime, rather than things. Therefore, if spacetime emerges in quantum gravity – as it is often claimed – then so may its singular structure. Although this proposal is intriguing, the attempt to uphold that spacetime singularities may be emergent fails. I provide three arguments in support of this claim, drawing upon different views on spacetime emergence.
-
The range of animal practices potentially classified as medical varies widely both functionally and mechanistically, and there is no agreed upon definition of medicine that can help determine which cases ought to count as such. In this paper, we argue that all available definitions are fatally flawed and defend our own characterisation of medicine, which incorporates both functional and mechanistic constraints. We apply our definition to the available evidence and determine which animal behaviours show a mere difference of degree with paradigmatic medical practices—and should thus be seen as medicine proper—and which should be excluded from this nomenclature.
-
Motivated by the question about whether we should be realists about dark matter, I propose a new articulation of the debate between the scientific realist and anti-realist. I defend three claims. First, that the debate should be articulated in normative terms, where questions about normativity are understood as being questions about authority. Second, that positions in this debate should be defended using pragmatist strategies, where pragmatist strategies are understood as being agent-first strategies. Third, that the manner of implementation of a pragmatist strategy with respect to some scientific-theoretical vocabulary—such as ‘dark matter’—is highly domain-specific and turns on choices made by agents about what (and whom) they recognise as authoritative.
-
[Editor’s Note: The following new entry by Mark Wrathall replaces the former entry on this topic by the previous author.]
Martin Heidegger (1889–1976) is a central figure in the
development of twentieth-century European Philosophy. His magnum
opus, Being and Time (1927), and his many essays and
lectures, profoundly influenced subsequent movements in European
philosophy, including Hannah Arendt’s political philosophy,
Jean-Paul Sartre’s existentialism, Simone de Beauvoir’s
feminism, Maurice Merleau-Ponty’s phenomenology of perception,
Hans-Georg Gadamer’s hermeneutics, Jacques Derrida’s
deconstruction, Michel Foucault’s post-structuralism, Gilles
Deleuze’s metaphysics, the Frankfurt School, and critical
theorists like Theodor Adorno, Herbert Marcuse, Jürgen Habermas,
and Georg Lukács.
-
[See my earlier posts in this series—Modal Rationalism intro and Chapter One: Kripke vs 2-D Semantics—for essential background.] TL;DR: You might think that there are multiple ideally conceivable ways that the space of possibilities might turn out. …
-
The making of mistakes by organisms and other living systems is a theoretically and empirically unifying feature of biological investigation. Mistake theory is a rigorous and experimentally productive way of understanding this widespread phenomenon. It does, however, run up against the long-standing ‘functions’ debate in philosophy of biology. Against the objection that mistakes are just a kind of malfunction, and that without a position on functions there can be no theory of mistakes, we reply that this is to misunderstand the theory. In this paper we set out the basic concepts of mistake theory and then argue that mistakes are a distinctive phenomenon in their own right, not just a kind of malfunction. Moreover, the functions debate is, to a large degree, independent of the concept of biological mistakes we outline. In particular, although the popular selected effects theory may retain its place within a more pluralistic conception of biological function, there is also need for a more forward-looking approach, where a robust concept of normativity can be an important driver of future experimental work.
-
laying down a program for this study. It is written for everyone who is curious about the world of symbols that surrounds us, in particular researchers and students in philosophy, history, cognitive science, and mathematics education. The main characteristics of mathematical notations are introduced and discussed in relation to the intended subject matter, the language in which the notations are verbalized, the cognitive resources needed for learning and understanding them, the tasks that they are used for, their material basis, and the historical context in which they are situated. Specific criteria for the design and assessment of notations are discussed, as well as ontological, epistemological, and methodological questions that arise from the study of mathematical notations and of their use in mathematical practice.
-
Believers in teleology also tend to believe in a distinction between the normal and the abnormal. I think teleology can be prised apart from a normal/abnormal distinction, however, if we do something that I think we should do for independent reasons: recognize teleological directedness without a telos-to-be-attained, a target to be hit. …
-
While the traditional conception of inductive logic is Carnapian, I develop a Peircean alternative and use it to unify formal learning theory, statistics, and a significant part of machine learning: supervised learning. Some crucial standards for evaluating non-deductive inferences have been assumed separately in those areas, but can actually be justified by a unifying principle.
-
This paper is a contribution to a symposium on Herman Cappelen’s 2023 book The Concept of Democracy: An Essay on Conceptual Amelioration and Abandonment. In that book, Cappelen develops a theory of abandonment—a theory of why and how to completely stop using particular linguistic expressions—and then uses that theory to argue for the general abandonment of the words “democracy” and “democratic”. In this paper, I critically discuss Cappelen’s arguments for the abandonment of “democracy” and “democratic” in political theory specifically.
-
Incurvati and Schlöder (Journal of Philosophical Logic, 51(6), 1549–1582, 2022) have recently proposed to define supervaluationist logic in a multilateral framework, and claimed that this defuses well-known objections concerning supervaluationism’s apparent departures from classical logic. However, we note that the unconventional multilateral syntax prevents a straightforward comparison of inference rules of different levels, across multi- and unilateral languages. This leaves it unclear how the supervaluationist multilateral logics actually relate to classical logic, and raises questions about Incurvati and Schlöder’s response to the objections. We overcome this obstacle by developing a general method for comparisons of strength between multi- and unilateral logics. We apply it to establish precisely on which inferential levels the supervaluationist multilateral logics defined by Incurvati and Schlöder are classical. Furthermore, we prove general limits on how classical a multilateral logic can be while remaining supervaluationistically acceptable. Multilateral supervaluationism leads to sentential logic being classical on the levels of theorems and regular inferences, but necessarily strictly weaker on meta- and higher levels, while in a first-order language with identity, even some classical theorems and inferences must be forfeited. Moreover, the results allow us to fill in the gaps of Incurvati and Schlöder’s strategy for defusing the relevant objections.
-
We all perform experiments very often. When I hear a noise and deliberately turn my head, I perform an experiment to find out what I will see if I turn my head. If I ask a question not knowing what answer I will hear, I am engaging in (human!) …
-
There’s something deeply wrong with the world, when the median US college graduate’s starting salary is a dozen times higher than the price to save another person’s entire life. The enduring presence of such low-hanging fruit reflects a basic societal failure to allocate resources in a way that reflects valuing those lives appropriately. …
-
There is no doubt that a theory that is unified has a certain appeal. Scientific practice in fundamental physics relies heavily on it. But is a unified theory more likely to be empirically adequate than a non-unified theory? Myrvold has pointed out that, on a Bayesian account, only a specific form of unification, which he calls mutual information unification, can have confirmatory value. In this paper, we argue that Myrvold’s analysis suffers from an overly narrow understanding of what counts as evidence. If one frames evidence in a way that includes observations beyond the theory’s intended domain, one finds a much richer and more interesting perspective on the connection between unification and theory confirmation. By adopting this strategy, we give a Bayesian account of unification that (i) goes beyond mutual information unification to include other cases of unification, and (ii) gives a crucial role to the element of surprise in the discovery of a unified theory. We illustrate the explanatory strength of this account with some cases from fundamental physics and other disciplines.
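The Bayesian mechanism at issue can be made concrete with a toy calculation (all numbers hypothetical, chosen only to illustrate the structure): two theories with equal priors assign the same probability to evidence e1, but the “unified” theory renders e1 informationally relevant to e2 (Myrvold-style mutual information), while the “disunified” theory treats them as independent.

```python
def posterior(prior_u, lik_u, lik_d):
    """Posterior probability of the unified theory given the joint evidence,
    by Bayes' theorem over two rival hypotheses."""
    num = prior_u * lik_u
    return num / (num + (1 - prior_u) * lik_d)

p_e1 = 0.6  # both theories agree on P(e1)

# Unified theory: learning e1 raises the probability of e2 to 0.9.
lik_unified = p_e1 * 0.9      # P(e1) * P(e2 | e1) = 0.54
# Disunified theory: e1 and e2 independent, each with probability 0.6.
lik_disunified = p_e1 * 0.6   # P(e1) * P(e2)      = 0.36

print(posterior(0.5, lik_unified, lik_disunified))  # ≈ 0.6: confirmation from unification
```

Starting from a prior of 0.5, the joint evidence pushes the unified theory to a posterior of about 0.6 purely because it binds the two evidence streams together, which is the confirmatory value of mutual information unification in miniature.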
-
Although the electron density can be calculated with the formal resources of quantum mechanics, in physics it does not play the leading role that the quantum state does. In contrast, the concept of electron density is central in quantum chemistry, in any of its different approaches: the Hartree-Fock Method, the Density Functional Theory, and the Quantum Theory of Atoms in Molecules.
-
Bell’s conclusion from his famous inequality was that any hidden variable theory that satisfies Local Causality is incompatible with the predictions of Quantum Mechanics (QM) for Bell’s Experiment. However, Local Causality does not appear in the derivation of Bell’s inequality. Instead, two other assumptions are used, namely Factorizability and Settings Independence. Therefore, in order to establish Bell’s conclusion, we need to relate these two assumptions to Local Causality. The prospects for doing so turn out to depend on the assumed location of the hidden states that appear in Bell’s inequality. In this paper, I consider the following two views on such states: (1) that they are states of the two-particle system at the moment of preparation, and (2) that they are states of thick slices of the past light cones of measurements. I argue that straightforward attempts to establish Bell’s conclusion fail in both approaches. Then, I consider three refined attempts, which I also criticise, and I propose a new way of establishing Bell’s conclusion that combines intuitions underlying several previous approaches.
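The QM predictions for Bell’s experiment referred to above can be stated concretely via the standard CHSH quantity. A minimal sketch of the textbook singlet-state calculation (setting angles chosen for maximal violation):

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a, b (radians)."""
    return -math.cos(a - b)

# Optimal CHSH settings: a = 0, a' = 90 deg; b = 45 deg, b' = 135 deg.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(round(S, 3))  # 2.828
```

Any theory satisfying Factorizability and Settings Independence obeys \(|S| \le 2\); the quantum value \(2\sqrt{2} \approx 2.828\) exceeds the bound, which is the incompatibility Bell’s conclusion asserts.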