-
38969.851291
This paper concerns the question of which collections of general relativistic spacetimes are deterministic relative to which definitions. We begin by considering a series of three definitions of increasing strength due to Belot (1995). The strongest of these definitions is particularly interesting for spacetime theories because it involves an asymmetry condition called “rigidity” that has been studied previously in a different context (Geroch 1969; Halvorson and Manchak 2022; Dewar 2024). We go on to explore other (stronger) asymmetry conditions that give rise to other (stronger) forms of determinism. We introduce a number of definitions of this type and clarify the relationships between them and the three considered by Belot. We then show that there are collections of general relativistic spacetimes satisfying much stronger forms of determinism than previously known. We also highlight a number of open questions.
-
38989.851362
Determinism is the thesis that the past determines the future, but efforts to define it precisely have exposed deep methodological disagreements. Standard possible-worlds formulations of determinism presuppose an "agreement" relation between worlds, but this relation can be understood in multiple ways, none of which is particularly clear. We critically examine the proliferation of definitions of determinism in the recent literature, arguing that these definitions fail to deliver clear verdicts about actual scientific theories. We advocate a return to a formal approach, in the logical tradition of Carnap, that treats determinism as a property of scientific theories, rather than an elusive metaphysical doctrine. We highlight two key distinctions: (1) the difference between qualitative and "full" determinism, as emphasized in recent discussions of physics and metaphysics, and (2) the distinction between weak and strong formal conditions on the uniqueness of world extensions. We argue that defining determinism in terms of metaphysical notions such as haecceities is unhelpful, whereas rigorous formal criteria such as Belot's D1 and D3 offer a tractable and scientifically relevant account. By clarifying what it means for a theory to be deterministic, we set the stage for a fruitful interaction between physics and metaphysics.
-
39010.851375
The idea that the universe is governed by laws of nature has precursors from ancient times, but the view that it is a - or even the - primary aim of science to discover these laws only became established during the 16th and 17th centuries, when it replaced the then prevalent Aristotelian conception of science. The most prominent promoters and developers of the new view were Galileo, Descartes, and Newton. Descartes, in Le Monde, dreamed of an elegant mathematical theory that specified laws describing the motions of matter, and Newton, in his Principia, went a long way towards realizing this dream.
-
39033.85139
This paper considers the mundane ways in which AI is being incorporated into scientific practice today, and particularly the extent to which AI is used to automate tasks perceived as boring, “mere routine”, and inconvenient to researchers. We label such uses as instances of “Convenience AI” — that is, situations where AI is applied with the primary intention of increasing speed and minimizing human effort. We outline how attributions of convenience to AI applications involve three key characteristics: (i) an emphasis on speed and ease of action, (ii) a comparative element, and (iii) a subject-dependent and subjective quality. Using examples from medical science and development economics, we highlight epistemic benefits, complications, and drawbacks of Convenience AI along these three dimensions. While the pursuit of convenience through AI can save precious time and resources and give rise to novel forms of inquiry, our analysis underscores how the uncritical adoption of Convenience AI for the sake of shortcutting human labour may also weaken the evidential foundations of science and generate inertia in how research is planned, set up, and conducted, with potentially damaging implications for the knowledge being produced. Critically, we argue that the consistent association of Convenience AI with the goals of productivity, efficiency, and ease, as often promoted by companies targeting the research market for AI applications, can lower critical scrutiny of research processes and shift focus away from appreciating their broader epistemic and social implications.
-
155903.851401
A. I guess because I'm exploring the format in some of my own writing. Q. A. It's not ready to show to anyone. In fact the project is more notional than actual—a few notes in a plain text file, which I peek at from time to time. …
-
204027.85141
Two problems are investigated. Why is it that in his solutions to logical problems, Boole’s logical/numerical operations can be difficult to pin down, and why did his late manuscript attempt to get rid of division by zero fall short of that goal? It is suggested that the former is due to different readings that he gives to the operations according to the stage of the solution routine, and the latter is due to a strict confinement to equational reasoning.
-
211997.851424
Following the lead of such heterogeneous and invariably brilliant thinkers as Thucydides, Arnold J. Toynbee, Winston Churchill, Carl Sagan, Philip K. Dick, and Niall Ferguson, I consider a virtual history – or an alternative Everettian branch of the universal wavefunction – in which the ancient materialism and atomism of Epicurus (and the heliocentrism of Aristarchus, for good measure) have prevailed over the (Neo)Platonist-Aristotelian religious-military complex. Such a historical swerve (pun fully intended) would have removed the unhealthy obsession with mind-body dualism and dialectics, which crippled much of European thought throughout the last millennium. It is at least open to serious questioning whether quasireligious totalitarian ideologies could have arisen and brought about so much death, suffering, and pain in this virtual history as they did in our actual history.
-
212016.851433
There’s a certain mindset that some people have when they think about fundamental physics and the world of middle-sized dry goods. The mindset is that the middle-sized stuff is somehow “less real” than the stuff that physics describes — elementary particles, quantum fields, etc. There are quite a few philosophers, and some scientists, who hold this view with great conviction, and whose research is driven by a desire to validate it. There are other people who have a completely different attitude about reductionism: they see it as the enemy of the good and beautiful, and as a force to be stopped. The worries of the anti-reductionists do seem to be well-motivated. If, for example, your wife is nothing more than some quantum fields in a certain state, then why vouchsafe her your eternal and undying love? More generally, is the existence of trees, horses, or our own children nothing more than a convenient fiction that biology or religion has tricked us into believing? If physics shows that these things are not fully real, how should we then live?
-
212037.851442
It has been argued that non-epistemic values have legitimate roles to play in the classification of psychiatric disorders. Such a value-laden view on psychiatric classification raises questions about the extent to which expert disagreements over psychiatric classification are fueled by disagreements over value judgments and the extent to which these disagreements could be resolved. This paper addresses these questions by arguing for two theses. First, a major source of disagreements about psychiatric classification is factual and concerns what social consequences a classification decision will have. This type of disagreement can be addressed by empirical research, although obtaining and evaluating relevant empirical evidence often requires interdisciplinary collaboration.
-
212056.85145
The Hard Problem of consciousness—explaining why and how physical processes are accompanied by subjective experience—remains one of the most challenging puzzles in modern thought. Rather than attempting to resolve this issue outright, in this paper I explore whether empirical science can be broadened to incorporate consciousness as a fundamental degree of freedom. Drawing on Russellian monism and revisiting the historical “relegation problem” (the systematic sidelining of consciousness by the scientific revolution), I propose an extension of quantum mechanics by augmenting the Hilbert space with a “consciousness dimension.” This framework provides a basis for reinterpreting psi phenomena (e.g., telepathy, precognition) as natural outcomes of quantum nonlocality and suggests that advanced non-human intelligence (NHI) technology might interface with a quantum-conscious substrate.
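One natural way to read “augmenting the Hilbert space with a consciousness dimension” formally is as a tensor-product extension; this is only an illustrative gloss, since the abstract does not say whether a tensor factor or some other enlargement is intended:

\[
\mathcal{H}_{\mathrm{ext}} = \mathcal{H}_{\mathrm{phys}} \otimes \mathcal{H}_{c}, \qquad |\Psi\rangle = \sum_i c_i\, |\phi_i\rangle \otimes |\chi_i\rangle ,
\]

where \(\mathcal{H}_{c}\) is the hypothesized “consciousness” factor and the \(|\chi_i\rangle\) are states of that factor.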
-
241294.851459
The Aristotelian corpus (corpus aristotelicum) is the collection of the extant works transmitted under the name of Aristotle, along with its organizational features, such as its ordering, internal textual divisions (into books and chapters), and titles. It has evolved over time: Aristotelian treatises have sometimes been lost and sometimes recovered, and “spurious” works now regarded as inauthentic have joined the collection while scribes and scholars were attempting to organize its massive amount of text in various ways. The texts it includes are highly technical treatises that were not originally intended for publication and at first circulated only within Aristotle’s philosophical circle; Aristotle distinguishes them from his “exoteric” works (Pol. 1278b30; EE 1217b22, 1218b34), which were meant for a wider audience.
-
374294.851467
In A New Logic, a New Information Measure, and a New Information-Based Approach to Interpreting Quantum Mechanics [13], David Ellerman argues that the essence of the mathematics of quantum mechanics is the linearized Hilbert space version of the mathematics of partitions. In his article, Ellerman lays out the key mathematical concepts involved in the progression from logic, to logical information, to quantum theory—of distinctions versus indistinctions, definiteness versus indefiniteness, or distinguishability versus indistinguishability, which he argues run throughout the mathematics of quantum mechanics.
-
385031.851478
Does science have any aim(s)? If not, does it follow that the debate about scientific progress is somehow misguided or problematically non-objective? These are two of the central questions posed in Rowbottom’s Scientific Progress. In this paper, I argue that we should answer both questions in the negative. Science probably has no aims, certainly not a single aim; but it does not follow from this that the debate about scientific progress is somehow misguided or problematically non-objective.
-
385054.851488
This paper examines the tension between the growing algorithmic control in safety-critical societal contexts—motivated by human cognitive fallibility—and the rise of probabilistic types of AI, primarily in the form of Large Language Models (LLMs). Although both human cognition and LLMs exhibit inherent uncertainty and occasional unreliability, some futurist visions of the “Singularity” paradoxically advocate relinquishing control of the main societal processes, including critical ones, to these probabilistic AI agents, heightening the risk of unpredictable or “whimsical” governance. As an alternative, a “mediated control” framework is proposed here: a more prudent arrangement wherein LLM-AGIs are strategically employed as “meta-programmers” to design sophisticated but fundamentally deterministic algorithms and procedures, or, more generally, powerful rule-based solutions. It is these algorithms or procedures, executed on classical computing infrastructure under human oversight and deployed on the basis of human deliberative decision processes, that act as the actual controllers of critical systems and processes. This constitutes a way to harness AGI creativity for algorithmic innovation while maintaining the essential reliability, predictability, and human accountability of the processes controlled by the algorithms so produced. The framework emphasizes a division of labor between the LLM-AGI and the algorithms it devises, rigorous verification and validation protocols as conditions for safe algorithm generation, and a mediated application of the resulting algorithms. Such an approach is not a guaranteed solution to the challenges of advanced AI, but, it is argued, it offers a more human-aligned, risk-mitigated, and ultimately more beneficial path towards integrating AGI into societal governance, possibly leading to a safer future while preserving essential domains of human freedom and agency.
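To make the proposed division of labor concrete, here is a minimal sketch in Python (hypothetical names throughout, not the paper's implementation): the LLM-AGI only writes a deterministic controller, which is then verified, approved by humans, and deployed on classical infrastructure.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Controller:
    # A deterministic, rule-based mapping from system state to action.
    rules: Callable[[dict], str]

def propose_controller(spec: str) -> Controller:
    # Stand-in for the LLM-AGI acting purely as a meta-programmer:
    # it emits a rule-based controller; it does not control anything itself.
    return Controller(rules=lambda state: "SAFE_SHUTDOWN" if state.get("fault") else "NOMINAL")

def verify_and_validate(ctrl: Controller, test_cases: list[tuple[dict, str]]) -> bool:
    # Stand-in for rigorous verification and validation: the candidate must
    # reproduce the expected action on every test case before deployment
    # is even considered.
    return all(ctrl.rules(state) == expected for state, expected in test_cases)

def human_board_approves(ctrl: Controller) -> bool:
    # Stand-in for the human deliberative decision process.
    return True

candidate = propose_controller("keep the plant inside its safety envelope")
tests = [({"fault": True}, "SAFE_SHUTDOWN"), ({"fault": False}, "NOMINAL")]
if verify_and_validate(candidate, tests) and human_board_approves(candidate):
    deployed = candidate  # only the deterministic artefact runs in production

The point of the sketch is the separation of roles: the probabilistic model appears only inside propose_controller, while everything that actually runs against the critical system is deterministic, tested, and human-approved.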
-
385101.851499
In current philosophy of science, extrapolation is seen as an inference from a study to a distinct target system of interest. The reliability of such an inference is generally thought to depend on the extent to which study and target are similar in relevant respects, which is especially problematic when they are heterogeneous. This paper argues that this understanding is underdeveloped when applied to extrapolation in ecology. Extrapolation in ecology is not always well characterized as an inference from a model to a distinct target but often includes inferences from small-scale experimental systems to large-scale processes in nature, i.e., inferences across spatiotemporal scales. For this reason, I introduce a distinction between compositional and spatiotemporal variability. Whereas the former describes differences in entities and causal factors between model and target, the latter refers to the variability of a system over space and time. The central claim of this paper is that our understanding of heterogeneity needs to be expanded to explicitly include spatiotemporal variability and its effects on extrapolation across spatiotemporal scales.
-
475653.851509
In a reference letter for Feyerabend’s application to UC Berkeley, Carl Hempel writes that ‘Mr. Feyerabend combines a forceful and penetrating analytic mind with a remarkably thorough training and high competence in theoretical physics and mathematics’ (Collodel and Oberheim, unpublished, 80). Similarly, Rudolf Carnap says of Feyerabend that he ‘knows both the physics and the philosophy thoroughly, and he is particularly well versed in the fundamental logical and epistemological problems of physics’ (83). These remarks echo a sentiment widely accepted amongst Feyerabend’s colleagues that his knowledge of physics was at an extremely high level. Feyerabend’s acumen in physics goes back to his youth, when, at the age of 13, he was offered a position as an observer at the Swiss Institute for Solar Research after building his own telescope (Feyerabend 1995, 27). It is unsurprising, therefore, that physics played an important and long-lasting role in Feyerabend’s work.
-
476988.851519
The desirable gambles framework provides a foundational approach to imprecise probability theory but relies heavily on linear utility assumptions. This paper introduces function-coherent gambles, a generalization that accommodates non-linear utility while preserving essential rationality properties. We establish core axioms for function-coherence and prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. The framework is then applied to analyze various forms of discounting in intertemporal choice, including hyperbolic, quasi-hyperbolic, scale-dependent, and state-dependent discounting. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework. This unified treatment provides theoretical foundations for modeling sophisticated patterns of time preference within the desirability paradigm, bridging a gap between normative theory and observed behavior in intertemporal decision-making under genuine uncertainty.
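For reference, the discounting schemes mentioned here weight a payoff received at delay t roughly as follows (standard textbook forms, not the paper's own definitions):

\[
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{qh}}(0) = 1,\ D_{\mathrm{qh}}(t) = \beta\,\delta^{t}\ \text{for}\ t \geq 1,
\]

with \(0 < \delta < 1\), \(k > 0\), and \(0 < \beta < 1\); quasi-hyperbolic (“beta-delta”) discounting captures the present bias that constant-rate exponential discounting rules out.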
-
532741.851528
Please enjoy Harvard’s Jacob Barandes and yours truly duking it out for 2.5 hours on YouTube about the interpretation of quantum mechanics, and specifically Jacob’s recent proposal involving “indivisible stochastic dynamics,” with Curt Jaimungal as moderator. …
-
558064.851536
Although several accounts of scientific understanding exist, the concept of understanding in relation to technology remains underexplored. This paper addresses this gap by proposing a philosophical account of technological understanding—the type of understanding that is required for and reflected by successfully designing and using technological artefacts. We develop this notion by building on the concept of scientific understanding. Drawing on parallels between science and technology, and specifically between scientific theories and technological artefacts, we extend the idea of scientific understanding into the realm of technology. We argue that, just as scientific understanding involves the ability to explain a phenomenon using a theory, technological understanding involves the ability to use a technological artefact to realise a practical aim.
-
558089.851546
Physics not only describes past, present, and future events but also accounts for unrealized possibilities. These possibilities are represented through the solution spaces given by theories. These spaces are typically classified into two categories: kinematical and dynamical. The distinction raises important questions about the nature of physical possibility. How should we interpret the difference between kinematical and dynamical models? Do dynamical solutions represent genuine possibilities in the physical world? Should kinematical possibilities be viewed as mere logical or linguistic constructs, devoid of a deeper connection to the structure of physical reality? This chapter addresses these questions by analyzing some of the most significant theories in physics: classical mechanics, general relativity, and quantum mechanics, with a final mention of quantum gravity. We argue that only dynamical models correspond to genuine physical possibilities.
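A standard illustration of the distinction, from textbook Lagrangian mechanics rather than from this chapter: the kinematically possible models are all sufficiently smooth curves q(t) through configuration space, while the dynamically possible models are those curves that also satisfy the Euler-Lagrange equations,

\[
\frac{d}{dt}\frac{\partial L}{\partial \dot q^{i}} - \frac{\partial L}{\partial q^{i}} = 0 .
\]

On the view defended in the chapter, only the latter class represents genuine physical possibilities.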
-
558107.851557
This paper examines cases in which an individual’s misunderstanding improves the scientific community’s understanding via “corrective” processes that produce understanding from poor epistemic inputs. To highlight the unique features of valuable misunderstandings and corrective processes, we contrast them with other social-epistemological phenomena including testimonial understanding, collective understanding, Longino’s critical contextual empiricism, and knowledge from falsehoods.
-
558131.851566
I argue that John Norton’s notions of empirical, hypothetical, and counterfactual possibility can be successfully used to analyze counterintuitive examples of physical possibility and align better with the modal intuitions of practicing physicists. First, I clarify the relationship between Norton’s possibility notions and the received view of logical and physical possibility. In particular, I argue that Norton’s empirical, hypothetical, and counterfactual possibility cannot coincide with the received view of physical possibility; instead, the received view of physical possibility is a special case of Norton’s logical possibility. I illustrate my claims using examples from Classical Mechanics, General Relativity, and Quantum Mechanics. I then arrive at my conclusions by subsuming Norton’s empirical, hypothetical, and counterfactual possibilities under a single concept of conditional inductive possibility and by analyzing the types and degrees of strength that can be associated with it.
-
558151.851576
A critique is given of the attempt by Hettema and Kuipers to formalize the periodic table. In particular, I dispute their identification of a naïve periodic table with tables having a constant periodicity of eight elements, as well as their views on the different conceptions of the atom held by chemists and physicists. The views of Hettema and Kuipers on the reduction of the periodic system to atomic physics are also considered critically.
-
558169.851586
Whereas most scientists are highly critical of constructivism and relativism in the context of scientific knowledge acquisition, the dominant school of chemical education researchers appears to support a variety of such positions. By reference to the views of Herron, Spencer, and Bodner, I claim that these authors are philosophically confused, and that they are presenting a damaging and anti-scientific message to other unsuspecting educators. Part of the problem, as I argue, is a failure to distinguish between pedagogical constructivism regarding students' understanding of science, and constructivism about the way that scientific knowledge is acquired by expert scientists.
-
615894.851596
De Haro, S. [2025]: ‘James Read’s Background Independence in Classical and Quantum Gravity’, BJPS Review of Books, 2025, https://doi.org/10.59350/693wk-sqn26
Background-independence has been a much-debated topic in spacetime theories. One of the main lessons of the general theory of relativity is that spacetime is not fixed, as in Newton’s theory, but is dynamical. Since the shape of a spacetime depends on its matter content, the relation between geometry and matter is dynamic. Thus there is no privileged spacetime background on which physics is to be done, unlike the cases of Newtonian space and time and special relativity’s Minkowski spacetime.
-
615930.851619
When is C a cause of E? Many traditional approaches to causation imply that the answer to this question must be of the form ‘C is a cause of E if and only if X’, where X is supposed to provide necessary and sufficient conditions for C’s being a cause of E, while itself not relying on causal notions. This reductive approach to causation has led to various valuable insights. However, some philosophers have always been sceptical that such an analysis is possible and, especially in the last two decades, the hope that philosophers could eventually agree on a widely accepted reductive theory of causation has faded. Nevertheless, the literature on causation is flourishing, since causal notions are central to both philosophical and scientific discourse and there is much to be said about them, even beyond attempts to provide a unified reductive analysis.
-
615951.85163
Here are two statements that are both very plausibly true, but which seem to be in serious tension: (1) in 1879 A. A. Michelson measured the speed of light to within 99% accuracy; (2) strictly speaking, there is no speed of light in special relativity. The purpose of this paper will be to resolve the tension between (1) and (2). The majority of what follows will be devoted to defending the second claim, which is remarkably controversial even among working physicists and philosophers of science. I argue that this controversy is due to a confusion about the role of co-ordinate representations in characterizing different theories of space-time. Once this confusion is resolved, it becomes clear that the claim that light has a speed at all is nothing more than an artifact of our representational scheme, and not an accurate reflection of the space-time structure of relativity. Before going into all that, I will say a few things in favor of (1).
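One standard way to see the coordinate-dependence at issue (my illustration, not a summary of the paper's argument): in standard inertial coordinates a light ray satisfies the null condition

\[
ds^{2} = -c^{2}\,dt^{2} + dx^{2} = 0 \quad\Rightarrow\quad \left|\frac{dx}{dt}\right| = c ,
\]

but the coordinate velocity dx/dt is not an invariant; under a different, equally admissible choice of coordinates or simultaneity convention the same null worldline is assigned a different coordinate speed, while the invariant fact is only that the worldline is null.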
-
615971.85164
The growing interest in the concept of the probability of self-location of a conscious agent has created multiple controversies. Considering David Albert’s setup, in which he described his worries about the consistency of the concept, I identify the sources of these controversies and argue that defining “self” in an operational way provides a satisfactory meaning for the probability of self-location of an agent in a quantum world. It keeps the nontrivial feature of having subjective ignorance of self-location without ignorance about the state of the universe. It also allows defining the Born rule in the many-worlds interpretation of quantum mechanics and proving it from some natural assumptions.
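For reference, the Born rule at issue, in its standard form rather than the paper's derivation: for a post-measurement state decomposed into orthonormal branch states,

\[
|\Psi\rangle = \sum_{i} \alpha_{i}\, |\Psi_{i}\rangle , \qquad
P(\text{self-location in branch } i) = |\alpha_{i}|^{2} .
\]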
-
788890.851649
In order to understand cognition, we often recruit analogies as building blocks of theories to aid us in this quest. One such attempt, originating in folklore and alchemy, is the homunculus: a miniature human who resides in the skull and performs cognition. Perhaps surprisingly, this appears indistinguishable from the implicit proposal of many neurocognitive theories, including that of the ‘cognitive map,’ which proposes a representational substrate for episodic memories and navigational capacities. In such ‘small cakes’ cases, neurocognitive representations are assumed to be meaningful and about the world, though it is wholly unclear who is reading them, how they are interpreted, and how they come to mean what they do. We analyze the ‘small cakes’ problem in neurocognitive theories (including, but not limited to, the cognitive map) and find that such an approach a) causes infinite regress in the explanatory chain, requiring a human-in-the-loop to resolve, and b) results in a computationally inert account of representation, providing neither a function nor a mechanism. We caution against a ‘small cakes’ theoretical practice across computational cognitive modelling, neuroscience, and artificial intelligence, wherein the scientist inserts their (or other humans’) cognition into models because otherwise the models neither perform as advertised, nor mean what they are purported to, without said ‘cake insertion.’ We argue that the solution is to tease apart explanandum and explanans for a given scientific investigation, with an eye towards avoiding van Rooij’s (formal) or Ryle’s (informal) infinite regresses.
-
788914.851657
A speculative exploration of the distinction between a relational formal ontology and a classical formal ontology for modelling phenomena in nature that exhibit relationally mediated wholism, such as phenomena from quantum physics and biosemiotics. Whereas a classical formal ontology is based on mathematical objects and classes, a relational formal ontology is based on mathematical signs and categories. A relational formal ontology involves nodal networks (systems of constrained iterative processes) that are dynamically sustained through signalling. The nodal networks are hierarchically ordered and exhibit characteristics of deep learning. Clarifying the distinction between classical and relational formal ontologies may help to clarify the role of interpretative context in physics (e.g., the role of the observer in quantum theory) and the role of hierarchical nodal networks in computational models of learning processes in generative AI.