-
I examine Howson’s alluring suggestion that Bayesianism, by supplying a logic of inductive inference—conditionalisation—solves the problem of induction. I draw on his historical heritage, especially Hume, Peirce, and Ramsey, to reconstruct the interpretation of the problem of induction that his remarks intimate. Roughly, it is that of how to amend the system with which one meets the world, in the light of new particulars. Unfortunately, his claim that conditionalisation constitutes a solution to this problem, I argue, fails by his own lights, because it turns on the widely endorsed but nonetheless erroneous contention that a justification of conditionalisation qua rule of inference can be given independently from a justification of the priors.
-
Bell’s inequality is derived from three assumptions: measurement independence, outcome independence, and parameter independence. Among these, measurement independence, often taken for granted, holds that hidden variables are statistically uncorrelated with measurement settings. Under this assumption, the violation of Bell’s inequality implies that either outcome independence or parameter independence fails to hold, meaning that local hidden variables do not exist. In this paper, we refer to this interpretive stance as the nonfactorizable position. In contrast, superdeterminism represents the view that measurement independence does not hold. Despite its foundational role, this assumption has received relatively little philosophical scrutiny. This paper offers a philosophical reassessment of measurement independence through three major frameworks in the philosophy of science: de Regt’s contextual theory of scientific understanding, Kuhn’s criteria for theory choice, and Lakatos’s methodology of scientific research programmes. Using these lenses, we evaluate the two major responses to the violation of Bell’s inequality, the nonfactorizable position and superdeterminism, and argue that the nonfactorizable position currently fares better across all three criteria. Beyond this binary, we introduce a spectrum of intermediate positions that allow for partial violations of measurement independence, modeled via mutual information. These positions modify the “positive heuristic” of superdeterminism, a crucial component in Lakatos’s definition of research programmes, offering avenues for progressive research. This analysis reframes the debate surrounding Bell’s inequality and illustrates how methodological tools can effectively guide theory evaluation in physics.
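A minimal sketch of how a partial violation of measurement independence can be quantified by mutual information (the code and notation are illustrative, not taken from the paper): measurement independence corresponds to I(Λ; S) = 0 between the hidden variable Λ and the setting pair S, while the intermediate positions allow 0 < I(Λ; S) up to some bound.

    import numpy as np

    def mutual_information(p_joint):
        """I(Lambda; S) in bits for a joint distribution p(lambda, s),
        given as a 2-D array with rows indexed by lambda, columns by s."""
        p_joint = np.asarray(p_joint, dtype=float)
        p_joint = p_joint / p_joint.sum()
        p_lam = p_joint.sum(axis=1, keepdims=True)   # marginal over settings
        p_s = p_joint.sum(axis=0, keepdims=True)     # marginal over the hidden variable
        mask = p_joint > 0
        return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_lam @ p_s)[mask])))

    # Product distribution: measurement independence holds, I = 0
    print(mutual_information(np.outer([0.5, 0.5], [0.25, 0.25, 0.25, 0.25])))
    # A correlated case: partial violation of measurement independence, I > 0
    print(mutual_information([[0.20, 0.05, 0.05, 0.20],
                              [0.05, 0.20, 0.20, 0.05]]))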
-
This paper addresses the accelerating crisis of ethical governance in an age of complex socio-technical change, particularly in the domain of Artificial Intelligence. It poses a foundational philosophical question: when, if ever, is AI assistance in ethical deliberation legitimate? An answer is developed through three theses: i) the Ethical No-Free-Lunch (ENFL) principle, which establishes the indispensability of human normative intervention and accountability; ii) the Discovery/Justification Separation inspired by Reichenbach’s work, which restricts AI use to the exploratory “context of discovery”; iii) the Algorithmic Mediated Control Framework (AMCF), which mandates that only scrutable, human-vetted deterministic algorithms generated with AI assistance, and not the AI itself, be entrusted with critical societal processes. From these theses, five legitimacy criteria for AI-assisted ethical deliberation are derived. Finally, the paper proposes the “AI-assisted Iterative Method for Ethical Deliberation” (AIMED), an actionable multi-stage workflow that fulfills these legitimacy criteria. This method integrates digital literature analysis, structured human–AI dialogue, human-only verification, and continuous feedback. The paper explicitly addresses several potential objections. It is shown how the AIMED framework aligns with and provides a concrete implementation for major international regulatory guidelines, such as the EU AI Act and the NIST AI Risk Management Framework. By situating the AIMED within traditions of proceduralism, the governance of inductive risk, and human–AI collaboration, the paper argues that this framework offers a philosophically justified, practically implementable model of AI-assisted ethical governance that can be seen as an actionable instance of Digital Humanism.
-
The current landscape of views on chance in the Everett interpretation is rocky. Everettians (Wallace 2012, Sebens and Carroll 2018, McQueen and Vaidman 2019) agree that chance should be derived using principles governing uncertain or partial belief, but they cannot agree on how. Critics (Baker 2007, Dawid and Thébault 2015, Mandolesi 2019) maintain that any such approach is circular. We smooth the landscape by shifting focus from what Everettians take to be uncertain to what they should think is certain: namely, the conditions under which branches are isolated. Our approach to isolation resolves the main tensions among the different Everettian chance derivations while clarifying how they avoid circularity.
-
We investigate the epistemic role of coherence in scientific reasoning, focusing on its use as a heuristic for filtering evidence. Using a novel computational model based on Bayesian networks, we simulate agents who update their beliefs under varying levels of noise and bias. Some agents treat reductions in coherence as higher-order evidence and interpret such drops as signals that something has gone epistemically awry, even when the source of error is unclear. Our results show that this strategy can improve belief accuracy in noisy environments but tends to mislead when evidence is systematically biased. We explore the implications for the rationality of coherence-based reasoning in science.
-
This paper discusses the problem of Hell, defending the Aquinas-Anselm-Edwards response that any immoral act deserves eternal punishment because it offends against God. I argue that the response is more defensible than one might at first think, but nevertheless faces a serious objection. If we differentiate two different problems of Hell—the logical problem and the evidential problem—we see that, in light of this objection, the Aquinas-Anselm-Edwards response only solves the logical problem of Hell.
-
The Principle of Proportionate Causality (PPC) defended by Aquinas and other scholastics says that a perfection P can only be caused by something that has P either formally or eminently. To have P formally is to have P. Roughly, to have P eminently is to have a perfection greater than P.
(Some add: “has P virtually” to the list of options. …
-
Gijsbers (2025) has recently proposed an original theory of ‘presentist velocities’: the instantaneous relative positions and relative velocities of all bodies at the present instant are metaphysically fundamental, and their positions and velocities at both past and future times metaphysically depend on them. If physics is deterministic, then present such facts fully determine future such facts; if physics is indeterministic, then some past and future facts are indeterminate. For simplicity, I will focus on the deterministic case. The theory of presentist velocities (henceforth: TPV) solves some pernicious problems faced by other theories of velocity, such as the at-at theory (present velocities supervene on positions at different times). But Gijsbers’ presentation only considers classical mechanics, and does so in a relatively non-technical manner. If TPV is to succeed, it should also work for more realistic physical theories. The aim of this letter is to show that TPV falls short in this respect: once we look at the details of classical, statistical and relativistic mechanics, presentist velocities face serious obstacles.
-
A correspondent asked me how a simple God can choose. I've thought much about this, never quite happy with what I have to say. I am still not happy (nor is it surprising if "how God functions" is beyond us!) …
-
This paper reconstructs the derivations underlying the kinematical part of Einstein’s 1905 special relativity paper, emphasizing their operational clarity and minimalist use of mathematics. Einstein employed modest tools—algebraic manipulations, Taylor expansions, partial differentials, and functional arguments—yet his method was guided by principles of linearity, symmetry, and invariance rather than the elaborate frameworks of electron theory. The published text in Annalen der Physik concealed much of the algebraic scaffolding, presenting instead a streamlined sequence of essential equations. Far from reflecting a lack of sophistication, this economy of means was a deliberate rhetorical and philosophical choice: to demonstrate that relativity arises from two simple postulates and basic operational definitions, not from the complexities of electron theory. The reconstruction highlights how Einstein’s strategy subordinated mathematics to principle, advancing a new mode of reasoning in which physical insight, rather than computational elaboration, held decisive authority. In this respect, I show that Einstein’s presentation diverges sharply from Poincaré’s. This paper is in memory of John Stachel, whose life’s work was devoted to illuminating Einstein’s special and general relativity.
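For readers who want the flavor of the reconstruction, the kinematical part turns on the light-synchronization condition written as a functional equation for the moving-frame time τ (the step as it appears, in standard reconstructions, in §3 of the 1905 paper):

$$ \tfrac{1}{2}\Big[\tau(0,0,0,t)+\tau\Big(0,0,0,\;t+\tfrac{x'}{c-v}+\tfrac{x'}{c+v}\Big)\Big]=\tau\Big(x',0,0,\;t+\tfrac{x'}{c-v}\Big), $$

which, for infinitesimal x' and a first-order Taylor expansion, yields

$$ \frac{\partial\tau}{\partial x'}+\frac{v}{c^{2}-v^{2}}\,\frac{\partial\tau}{\partial t}=0 . $$

Linearity then gives τ = a(v)\,(t − v x'/(c² − v²)), with the remaining function fixed by the postulates, a small illustration of the “economy of means” the abstract describes.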
-
Jean-Marc Ginoux’s recent book, Poincaré, Einstein and the Discovery of Special Relativity: An End to the Controversy (2024), seeks to close the debate over the respective roles of Poincaré and Einstein. Yet what is presented as an “end” may instead invite a more careful analysis of how similar equations can conceal divergent conceptions. The aim here is not to rehearse priority disputes but to show how Einstein’s ether-free, principle-based kinematics marked out a path that, unlike its contemporaries, became the canonical form of special relativity. To this end, I reconstruct side by side, in a novel way, the 1905 derivations of Poincaré and Einstein, tracing their similarities and, more importantly, their contrasting paths.
-
Mature scientific hypotheses are confirmed by large amounts of independent evidence. How could anyone be an anti-realist under these conditions? A classic response appeals to confirmational holism and underdetermination, but it is unclear whether traditional arguments succeed. I offer a new line of argument: If holism is interpreted as saying that the confirmation of every part of a hypothesis depends on the confirmation of the whole hypothesis, we must formulate conditions under which the confirmation received by the whole can be transferred to its parts. However, underdetermination suggests that relevant conditions are typically not met. If this is true, the confirmation received by the whole remains bounded by the priors for the parts, and we lack compelling reasons to believe substantive hypotheses based on evidence beyond the degree to which the posits involved in them are antecedently believed. A rejoinder comes from selective realism: If some posit is preserved throughout theory change, it is confirmed beyond the degree to which the containing hypothesis is. However, the variant of holism considered here implies precisely that we cannot confirm such posits in isolation. As I will show, the realist is thus forced into a dilemma: Either she succumbs to the holistic challenge, or she must embrace meta-empirical facts, such as the posit’s recurrence, as confirmatory.
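One way to make the bound explicit (a sketch in Bayesian terms, not the paper’s own formulation): take the hypothesis to be a conjunction of posits, H = A ∧ B. Since H entails A,

$$ P(H \mid E) \;\le\; P(A \mid E), $$

and if the posit A receives no confirmation in isolation, i.e. P(A ∣ E) = P(A), then

$$ P(H \mid E) \;\le\; P(A): $$

the posterior of the whole hypothesis is bounded by the prior of each such part, which is the sense in which confirmation of the whole cannot outrun antecedent belief in its posits.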
-
This paper investigates histories in Branching Space-Time (BST) structures. We start by identifying necessary and sufficient conditions for the existence of free histories; we then turn to the intangibility problem and show that the existence of histories in BST structures is equivalent to the axiom of choice, yielding the punchline “history gives us choice”.
-
We present a conceptual framework in which quantum probabilities arise from discrete events generated by real-valued alignments of inner products between two dynamically evolving wavefunctions. In this perspective, discreteness and probabilistic behavior emerge from the temporal structure of such events rather than being imposed axiomatically. Illustrative calculations show that the Born rule can appear as the limiting frequency of these events, without invoking wavefunction collapse, many-worlds branching, or decision-theoretic postulates. A two-state example demonstrates consistency with standard quantum predictions and suggests how outcome frequencies track Born weights. Extensions to interference scenarios, quantization heuristics, and multidimensional systems indicate that this proposal provides a fresh conceptual angle on the origin of quantum probabilities. This work is exploratory and aims to highlight the underlying idea rather than provide a completed alternative theory; questions concerning dynamical equations, general proofs, and experimental signatures remain open for future research.
-
With the classical distinction between context of discovery and context of justification considered by many to have been overcome, heuristics (understood in a broad sense) has increasingly rekindled the interest of philosophers of science. Building on this trend, a heuristic approach to the Voigt transformation (based on Rescher's Aporetics) is first presented - an issue on which there seem to be no precedents in the literature. Second, the value of this approach is defended from a philosophical (and, indirectly, pedagogical) viewpoint. By using this approach, several conceptual links in the theory of space-time can be highlighted (links which go unnoticed in classical hypothetico-deductive methods leading to the Voigt transformation). In particular, an interesting connection with the Lorentz transformation becomes apparent.
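For reference (standard material, not a reproduction of the paper’s heuristic route), the Voigt transformation reads

$$ t'=t-\frac{vx}{c^{2}},\qquad x'=x-vt,\qquad y'=\frac{y}{\gamma},\qquad z'=\frac{z}{\gamma},\qquad \gamma=\frac{1}{\sqrt{1-v^{2}/c^{2}}}, $$

and multiplying all four equations by γ yields the Lorentz transformation, which is one way of seeing the connection the abstract mentions.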
-
William Kretschmer, Sabee Grewal, Matthew DeCross, Justin A. Gerber, Kevin Gilmore, Dan Gresh, Nicholas Hunter-Jones, Karl Mayer, Brian Neyenhuis, David Hayes, Scott Aaronson
A longstanding goal in quantum information science is to demonstrate quantum computations that cannot be feasibly reproduced on a classical computer. Such demonstrations mark major milestones: they showcase fine control over quantum systems and are prerequisites for useful quantum computation. To date, quantum advantage has been demonstrated, for example, through violations of Bell inequalities and sampling-based quantum supremacy experiments. However, both forms of advantage come with important caveats: Bell tests are not computationally difficult tasks, and the classical hardness of sampling experiments relies on unproven complexity-theoretic assumptions.
-
How, exactly, can category theory help modeling in public health? I wrote a paper about this with two people who helped run Canada’s COVID modeling, together with a software engineer and a mathematician at the Topos Institute:
• John Baez, Xiaoyan Li, Sophie Libkind, Nathaniel D. Osgood and Eric Redekopp, A categorical framework for modeling with stock and flow diagrams, in Mathematics of Public Health: Mathematical Modelling from the Next Generation, eds. …
-
In this paper, we propose a novel algorithm for epistemic planning based on dynamic epistemic logic (DEL). The novelty is that we limit the depth of reasoning of the planning agent to an upper bound b, meaning that the planning agent can only reason about higher-order knowledge to at most (modal) depth b. The algorithm makes use of a novel type of canonical b-bisimulation contraction guaranteeing unique minimal models with respect to b-bisimulation. We show our depth-bounded planning algorithm to be sound. Additionally, we show it to be complete with respect to planning tasks having a solution within bound b of reasoning depth (and hence the iterative bound-deepening variant is complete in the standard sense). For bound b of reasoning depth, the algorithm is shown to be (b+1)-EXPTIME-complete, and furthermore fixed-parameter tractable in the number of agents and atoms. We present both a tree search and a graph search variant of the algorithm, and we benchmark an implementation of the tree search version against a baseline epistemic planner.
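The iterative bound-deepening variant mentioned above has a simple outer-loop structure. A schematic sketch follows; solve_with_bound is a placeholder for the depth-bounded planner described in the paper, not an implementation of it.

    from typing import Any, Callable, Optional

    def iterative_bound_deepening(task: Any,
                                  solve_with_bound: Callable[[Any, int], Optional[list]],
                                  max_bound: Optional[int] = None):
        """Call a depth-bounded epistemic planner with b = 1, 2, 3, ...
        and return the first plan found together with the bound that sufficed."""
        b = 1
        while max_bound is None or b <= max_bound:
            plan = solve_with_bound(task, b)   # reasons about knowledge only up to modal depth b
            if plan is not None:
                return plan, b
            b += 1
        return None

If every solvable task has a solution at some finite reasoning depth, this loop inherits completeness in the standard sense from the bounded planner’s completeness within each bound b.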
-
Are English future auxiliaries (like will and be going to) modals in some semantically interesting sense? Are they the semantic brethren of might and should, or are they more similar to the past tense? According to the non-modal view of future auxiliaries, such expressions merely serve to shift the time of evaluation forward, just as the past tense shifts the time of evaluation backward. Perhaps the most familiar modal analysis of future operators is the Peircean theory discussed by Prior (1967). One version of this theory says that fut ϕ is true just in case ϕ is true at all future possibilities (more carefully: fut ϕ is true at a world w and time t just in case every future possibility w′ for w at t is such that there is a time t′ later than t such that ϕ is true at w′ and t′). But what is a future possibility? A schematic answer: given a possible world w and a time t, we say that w′ is a future possibility for w at t iff w′ is sufficiently similar to w up until and including t (so w and w′ may differ significantly thereafter).
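In symbols, the Peircean clause sketched above can be written as follows (a routine formalization, not a quotation from the text):

$$ \langle w,t\rangle \Vdash \mathsf{fut}\,\varphi \;\iff\; \forall w'\,\big(w'\in \mathrm{FutPoss}(w,t)\;\rightarrow\;\exists t'>t:\ \langle w',t'\rangle \Vdash \varphi\big), $$

where w′ ∈ FutPoss(w, t) iff w′ is sufficiently similar to w up to and including t.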
-
This paper explores whether people are more likely to recognize inconsistency in others’ judgments than in their own, and if so, why. It reports two pre-registered online experiments with samples representative of the UK population (N = 814 and N = 1,623). In Study 1, people are more likely to recognize inconsistency in others’ moral (and non-moral) judgments than in their own. Study 2 replicates this finding and tests three explanations: (i) motivated reasoning, (ii) selective cognitive effort, and (iii) limited insight into others’ reasoning. Ad (i), because people’s susceptibility to motivated reasoning is said to diminish when people must account for their judgments, the presence of motivated reasoning was examined by manipulating social accountability. No effect was found. Ad (ii), while people spent significantly more time (a proxy for cognitive effort) on reviewing others’ consistency than their own, this explained only a fraction of the greater rate at which inconsistencies in others’ reasoning were recognized. Ad (iii), using low confidence in consistency evaluations as a proxy for limited insight, the study did not find support for the limited insight hypothesis. The finding that people are better at recognizing inconsistency in others’ moral judgments aligns with the idea that moral consistency reasoning is a social device that operates best when we interact with others, but more research is needed to uncover the psychological mechanisms behind this effect.
-
Warning: I worry there may be something wrong in the reasoning below. Causal Decision Theory (CDT) and Epistemic Decision Theory (EDT) tend to disagree when the payoff of an option statistically depends on your propensity to go for that option. …
-
Consider a non-relativistic quantum particle with wave function inside a region Ω ⊂ ℝ³, and suppose that detectors are placed along the boundary ∂Ω. The question of how to compute the probability distribution of the time at which the detector surface registers the particle boils down to finding a reasonable mathematical definition of an ideal detecting surface; a particularly convincing definition, called the absorbing boundary rule, involves a time evolution for the particle’s wave function ψ expressed by a Schrödinger equation in Ω together with an “absorbing” boundary condition on ∂Ω first considered by Werner in 1987, viz., ∂ψ/∂n = iκψ with κ > 0 and ∂/∂n the normal derivative. We provide here a discussion of the rigorous mathematical foundation of this rule. First, for the viability of the rule it plays a crucial role that these two equations together uniquely define the time evolution of ψ; we point out here how, under some technical assumptions on the regularity (i.e., smoothness) of the detecting surface, the Lumer-Phillips theorem implies that the time evolution is well defined and given by a contraction semigroup. Second, we show that the collapse required for the N-particle version of the problem is well defined. We also prove that the joint distribution of the detection times and places, according to the absorbing boundary rule, is governed by a positive-operator-valued measure.
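In more explicit notation (a sketch of how the rule is usually stated; the symbols are mine, not the abstract’s): the joint distribution of the detection time T and detection place X is given by the normal probability flux across the boundary,

$$ \mathrm{Prob}\big(T\in dt,\ X\in dS\big)=\mathbf{n}(\mathbf{x})\cdot\mathbf{j}^{\psi_{t}}(\mathbf{x})\,dS\,dt,\qquad \mathbf{j}^{\psi}=\frac{\hbar}{m}\,\mathrm{Im}\big(\psi^{*}\nabla\psi\big), $$

and the boundary condition ∂ψ/∂n = iκψ ensures that on ∂Ω the normal flux equals (ħκ/m)|ψ_t|² ≥ 0, so it can consistently be read as a probability density.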
-
Democratic theorists and social epistemologists often celebrate the epistemic benefits of diversity. One of the cornerstones is the ‘diversity trumps ability’ result by Hong and Page (2004). Ironically, the interplay between diversity and ability is rarely studied in radically different frameworks. Although diversity has been studied in prediction and search problems, the diversity-expertise tradeoff has not been studied systematically for small, deliberative groups facing binary classification problems. To fill this gap, I will introduce a new evidential sources framework and study whether, when, and (if so) why diversity trumps expertise in binary classification problems.
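For a feel of the diversity-expertise tradeoff in the binary-classification setting (a generic Condorcet-style illustration, not the paper’s evidential sources framework), compare a single expert with a majority vote over independent, less accurate classifiers.

    from math import comb

    def majority_accuracy(n: int, p: float) -> float:
        """Probability that a majority of n independent voters, each correct
        with probability p, classifies a binary case correctly (n odd)."""
        k_min = n // 2 + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

    print(majority_accuracy(1, 0.75))   # a lone expert
    print(majority_accuracy(11, 0.60))  # eleven diverse, independent non-experts
    # With full independence, the diverse group roughly matches the expert here;
    # correlated errors or smaller groups change the picture.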
-
We apply recent ideas about complexity and randomness to the philosophy of laws and chances. We develop two ways to use algorithmic randomness to characterize probabilistic laws of nature. The first, a generative chance law, employs a nonstandard notion of chance. The second, a probabilistic constraining law, imposes relative frequency and randomness constraints that every physically possible world must satisfy. The constraining notion removes a major obstacle to a unified governing account of non-Humean laws, on which laws govern by constraining physical possibilities; it also provides independently motivated solutions to familiar problems for the Humean best-system account (the Big Bad Bug and the zero-fit problem). On either approach, probabilistic laws are tied more tightly to corresponding sets of possible worlds: some histories permitted by traditional probabilistic laws are now ruled out as physically impossible. Consequently, the framework avoids one variety of empirical underdetermination while bringing to light others that are typically overlooked.
-
[An excerpt from Beyond Right and Wrong.] Some rights can be expected to promote overall well-being. Utilitarianism endorses these. Other rights lack this utilitarian property: they protect people against harmful interventions, but at greater cost to others who miss out on helpful interventions as a result. …
-
I discuss the nature of the puzzle about the time-asymmetry of radiation and argue that its most common formulation is flawed. As a result, many proposed solutions fail to solve the real problem. I discuss a recent proposal of Mathias Frisch as an example of the tendency to address the wrong problem. I go on to suggest that the asymmetry of radiation, like the asymmetry of thermodynamics, results from the initial state of the universe.

1. Introduction. There is a puzzle about radiation. In our experience, waves display a clear time-asymmetry. Waves appear to spread outwards after their sources move; they do not converge on sources which then begin to move. Water waves diverge after a pebble is dropped in a pond; they do not travel inwards to a spot from which a pebble is then ejected. We see electromagnetic waves emerge after charges accelerate, not converge on charges which then begin to accelerate. Yet the equations governing wave phenomena are symmetric in time, allowing for both the kinds of waves we see and the time-reversal of these processes. Then where does the observed asymmetry of radiation come from?
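For reference, the symmetry at issue can be stated in one line (standard textbook material, in Gaussian units, not drawn from the paper):

$$ \Big(\nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\Big)\phi(\mathbf{x},t)=-4\pi\rho(\mathbf{x},t),\qquad \phi_{\mathrm{ret/adv}}(\mathbf{x},t)=\int\frac{\rho\big(\mathbf{x}',\,t\mp|\mathbf{x}-\mathbf{x}'|/c\big)}{|\mathbf{x}-\mathbf{x}'|}\,d^{3}x'. $$

The equation is invariant under t ↦ −t, which interchanges the retarded and advanced solutions; the puzzle is why only the retarded-looking behavior is observed.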
-
What do large language models actually model? Do they tell us something about human capacities, or are they models of the corpus we’ve trained them on? I give a non-deflationary defence of the latter position. Cognitive science tells us that linguistic capabilities in humans rely on supralinear formats for computation. The transformer architecture, by contrast, supports at best a linear format for processing. This argument will rely primarily on certain invariants of the computational architecture of transformers. I then suggest a positive story about what transformers are doing, focusing on Liu et al. (2022)’s intriguing speculations about shortcut automata. I conclude with why I don’t think this is a terribly deflationary story. Language is not (just) a means for expressing inner state but also a kind of ‘discourse machine’ that lets us make new language given appropriate context. We have learned to use this technology in one way; LLMs have learned to use it too, but via very different means.
-
Suppose there are two opaque boxes, A and B, of which I can choose one. A nearly perfect predictor of my actions put $100 in the box that they thought I would choose. Suppose I find myself with evidence that it’s 75% likely that I will choose box A (maybe in 75% of cases like this, people like me choose A). …
-
In 2015, Amy Finkelstein, Nathaniel Hendren, and Erzo Luttmer released an NBER working paper called “The Value of Medicaid: Interpreting Results from the Oregon Health Insurance Experiment.” The paper’s results were a slap in the face of Social Desirability Bias — and the authors boldly advertised them right in the abstract:
Our baseline estimates of Medicaid's welfare benefit to recipients per dollar of government spending range from about $0.2 to $0.4, depending on the framework, with at least two-fifths – and as much as four-fifths – of the value of Medicaid coming from a transfer component, as opposed to its ability to move resources across states of the world. …
-
Interactions between agents are supported through a continuous process of detecting and responding to behaviors that are contingent upon the other agent’s behavior. Here, we explore the temporal dependence of these mechanisms, focusing on the role of timescale compatibility in inter-agent interactions. Using continuous-time recurrent neural networks (CTRNNs) to control embodied agents in a minimal social interaction task, we demonstrate that effective interactions require agents to operate on compatible timescales. Our results indicate that timescale mismatches disrupt agents’ ability to distinguish other agents from non-social entities, revealing a timescale threshold beyond which agents begin misclassifying slower agents as static objects and faster agents as non-social animate objects.
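To make the role of the timescale parameter concrete, here is a minimal CTRNN update in the standard Beer-style formulation (an illustrative sketch; the parameters and names are assumptions, not taken from the paper):

    import numpy as np

    def ctrnn_step(y, W, theta, I, tau, dt=0.01):
        """One Euler step of tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i.
        The time constants tau set the timescale on which each neuron responds."""
        sigma = 1.0 / (1.0 + np.exp(-(y + theta)))   # logistic activation
        dydt = (-y + W @ sigma + I) / tau
        return y + dt * dydt

    rng = np.random.default_rng(0)
    y = np.zeros(3)
    W = rng.normal(size=(3, 3))
    theta = np.zeros(3)
    I = np.array([0.5, 0.0, 0.0])
    fast, slow = np.full(3, 0.1), np.full(3, 2.0)    # mismatched time constants
    for _ in range(1000):
        y = ctrnn_step(y, W, theta, I, fast)         # swap in `slow` to compare the dynamics

Agents whose time constants differ by a large factor respond to each other on incompatible scales, which is the kind of mismatch the study manipulates.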