1. 1877247.161033
    There are four well-known models of fundamental objective probabilistic reality: classical probability, comparative probability, non-Archimedean probability, and primitive conditional probability. I offer two desiderata for an account of fundamental objective probability, comprehensiveness and non-superfluity. It is plausible that classical probabilities lack comprehensiveness by not capturing some intuitively correct probability comparisons, such as that it is less likely that 0 = 1 than that a dart randomly thrown at a target will hit the exact center, even though both classically have probability zero. We thus want a comparison between probabilities with a higher resolution than we get from classical probabilities. Comparative and non-Archimedean probabilities have a hope of providing such a comparison, but for known reasons do not appear to satisfy our desiderata. The last approach to this problem is to employ primitive conditional probabilities, such as Popper functions, and then argue that P(0 = 1 | 0 = 1 or hit center) = 0 < 1 = P(hit center | 0 = 1 or hit center). But now we have a technical question: How can we reconstruct a probability comparison, ideally satisfying the standard axioms of comparative probability, from a primitive conditional probability? I will prove that, given some plausible assumptions, it is impossible to perform this task: conditional probabilities just do not carry enough information to define a satisfactory comparative probability. The result is that none of the models satisfies our two desiderata. We end by briefly considering three paths forward.
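The conditional-probability move above can be made concrete with a toy sketch (my own illustration, not the paper's formal construction): a primitive Popper-style conditional probability on a finite stand-in for the dart board, where the empty event plays the role of 0 = 1 and a distinguished point plays the role of the exact center. Unlike the real dart case, the center here has positive unconditional probability; the point is only to exhibit the conditional inequality.

```python
# Toy primitive conditional probability on a finite stand-in for the
# dart board. The empty set stands in for the impossible event 0 = 1;
# 'center' stands in for hitting the exact center.

CENTER = "center"
BOARD = frozenset({CENTER, "ring1", "ring2", "ring3"})

def cond_prob(A, B):
    """P(A | B): uniform ratio given any nonempty condition; conditioning
    on the empty (abnormal) event yields 1, as Popper's axioms require."""
    A, B = frozenset(A) & BOARD, frozenset(B) & BOARD
    if not B:  # abnormal condition: P(A | contradiction) = 1
        return 1.0
    return len(A & B) / len(B)

impossible = frozenset()          # the event that 0 = 1
hit_center = frozenset({CENTER})  # the "exact center" event
disjunction = impossible | hit_center

print(cond_prob(impossible, disjunction))   # 0.0
print(cond_prob(hit_center, disjunction))   # 1.0
```

Conditioning on the disjunction yields 0 for the impossible event and 1 for hitting the center, which is the comparison the primitive-conditional-probability approach hopes to capture.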
    Found 3 weeks ago on PhilSci Archive
  2. 1877275.161097
    Weatherall and Manchak (2014) show that Reichenbachian universal effects, constrained to a rank-2 tensor field representation in the geodesic equation, always exist in non-relativistic gravity but not in relativistic spacetimes. Thus general relativity is less susceptible to underdetermination than its Newtonian predecessor. Durr and Ben-Menahem (2022) argue these assumptions are exploitable as loopholes, effectively establishing a (rich) no-go theorem. I distinguish between two targets of the proof, which have previously been conflated: the existence claim that at least one alternative geometry to a given one exists, and Reichenbach’s (in)famous “theorem theta”, which amounts to a universality claim that any geometry can function as an alternative to any other. I show there is no (rich) no-go theorem to save theorem theta. I illustrate this by explicitly breaking one of the assumptions and generalising the proof to torsionful spacetimes. Finally, I suggest a programmatic attitude: rather than undermining the proof, one can use it to systematically and rigorously articulate stronger propositions to be proved, thereby systematically exploring the space of alternative spacetime theories.
    Found 3 weeks ago on PhilSci Archive
  3. 1937099.161112
    On All-False Open Futurism (AFOF), any future-tensed statement about a future contingent must be false. It is false that there will be a sea battle tomorrow, for instance. Suppose now I realize that due to a bug, tomorrow I will be able to transfer ten million dollars from a client’s account to mine, and then retire to a country that won’t extradite me. …
    Found 3 weeks, 1 day ago on Alexander Pruss's Blog
  4. 1962125.161122
    Very short summary: This essay discusses the distinction between the public and the private. This distinction has a particular value in liberal societies. I argue that publicity is a requirement of social morality. …
    Found 3 weeks, 1 day ago on The Archimedean Point
  5. 1962305.16113
    We can now simulate environments containing vast numbers of agents engaging in complex interactions. Given projected advances in computing power, it is reasonable to expect that we will one day be able to create simulated agents that think and feel much as we do. One might doubt that sims will ever be able to feel, but the view that sims can be conscious has been defended by proponents of the simulation argument (Bostrom 2003; Chalmers 2022: ch. 5, ch. 15). Here, I assume that sims can be conscious. In what follows, I will use “simulations” to refer narrowly to simulations involving such agents, unless otherwise stated. Additionally, following Bostrom (2003), I will use “simulations” to refer to “ancestor simulations”—simulations of posthuman civilizations’ evolutionary histories, again unless otherwise stated. This focus is not due to the relevance of repeating details of history, but is meant rather to maintain focus on simulations whose scale and complexity resemble those of our universe.
    Found 3 weeks, 1 day ago on Nick Bostrom's site
  6. 1963638.161138
    This study examines the nature of the relationship between mathematics and physical reality from a critical epistemological perspective. It proposes the concept of "epistemological proportion" as an explanatory mechanism to understand how mathematical structures interact with scientific knowledge. Through the analysis of selected historical cases and an examination of Gödel's incompleteness theorems, this study suggests a balanced theoretical framework that re-positions mathematics as a powerful, yet fallible, cognitive tool. The study concludes by emphasizing the necessity of re-evaluating the role of mathematics in scientific inquiry, underscoring the importance of balancing mathematical elegance with empirical truth.
    Found 3 weeks, 1 day ago on PhilSci Archive
  7. 1963659.161148
    Baroque questions of set-theoretic foundations are widely assumed to be irrelevant to physics. In this article, we challenge this assumption. We show that even such fundamental questions as whether a theory is deterministic — whether it fixes a unique future given the present — depend on set-theoretic axiom candidates over which there is philosophical disagreement.
    Found 3 weeks, 1 day ago on PhilSci Archive
  8. 1963694.161156
    Charles Darwin argued that natural selection produces species analogously to how artificial selection produces breeds. Previous analyses have focused on the formal structure of Darwin’s analogical argument, but few authors have investigated how it is that Darwin’s analogy succeeds in yielding support for his theory in the first place. This topic is particularly salient since at first blush, Darwin's analogical argument appears to undermine the inference he aims to make with it. Darwin held that natural selection produces new species, but artificial selection produces only varieties—a fact which led many of Darwin’s contemporaries to see the analogy as counterevidence to his theory, rather than evidence in favor. I argue that the key to understanding how Darwin’s analogy supports his theory is to recognize three core conceptual revisions to the ‘received view’ of artificial selection for which he argued. Only on Darwin’s resultant ‘revised view’ of artificial selection did his analogical argument support, rather than undermine, his theoretical explanation for the origin of species. These revisions are: 1) the sufficiency of mere differential reproduction for producing evolutionary change; 2) the limitless variation of organisms; and 3) the age and stability of Earth’s geological history. I show why Darwin needed to establish these particular conceptual modifications in order for his analogical argument to generate theoretical support, and I further suggest that accounts focused on the formal aspects of Darwin’s analogical argument cannot capture the significance of Darwin’s conceptual revisions to the success of his analogical argument.
    Found 3 weeks, 1 day ago on PhilSci Archive
  9. 2042285.161176
    This is probably an old thing that has been discussed to death, but I only now noticed it. Suppose an open future view on which future contingents cannot have truth value. What happens to entailments? …
    Found 3 weeks, 2 days ago on Alexander Pruss's Blog
  10. 2048903.161184
    While correct as far as it goes, this standard picture can encourage an overly sharp distinction between scientific activities and ethical deliberation. Far from entering only at the policy-making stage, ethical judgments often shape scientific research itself. This is most obvious in the choice of research questions. The choice of what to study ultimately affects what knowledge can be brought to bear in real-world decisions, including consequences for which (and whose) decisions can be made with the benefit of scientific insight.
    Found 3 weeks, 2 days ago on Wendy S. Parker's site
  11. 2050047.161193
    Numerous theories of quantum gravity (QG) postulate non-spatiotemporal structures to describe physics at or beyond the Planck energy scale. This stands in stark contrast to the spatiotemporal framework provided by general relativity, which remains remarkably successful in low-energy regimes. The resulting tension gives rise to the so-called disappearance of spacetime (DST): the removal of spatiotemporal structures from the fundamental ontology of a theory and the corresponding challenge of reconciling this with the general relativistic picture. In this paper, I classify different instances of DST and highlight the necessary trade-off between theory-specific features and general patterns across QG approaches. I argue that a precise formulation of the DST requires prior clarification of the relevant conception of fundamentality. In particular, I distinguish two forms of disappearance, corresponding to intra-theoretic and inter-theoretic fundamentality relations. I argue that intra-theoretic analyses can yield meaningful insights into the DST in QG only when supported by further justificatory arguments. To substantiate my claim, I examine the relationship between string theory, noncommutative geometry, and special relativity.
    Found 3 weeks, 2 days ago on PhilSci Archive
  12. 2050071.161201
    According to the Causal Principle, anything that begins to exist has a cause. In turn, various authors – including Thomas Hobbes, Jonathan Edwards, and Arthur Prior – have defended the thesis that, had the Causal Principle been false, there would be no good explanation for why entities do not begin at arbitrary times, in arbitrary spatial locations, in arbitrary number, or of arbitrary kind. I call this the Hobbes-Edwards-Prior Principle (HEPP). However, according to a view popular among both philosophers of physics and naturalistic metaphysicians – Neo-Russellianism – causation is absent from fundamental physics. I argue that objections based on the HEPP should have no dialectical force for Neo-Russellians. While Neo-Russellians maintain that there is no causation in fundamental physics, they also have good reason to reject the HEPP.
    Found 3 weeks, 2 days ago on PhilSci Archive
  13. 2050102.161209
    The Born-Oppenheimer Approximation (BOA) is a widely used strategy in quantum chemistry due to its efficacy within its scope of validity. Originally proposed by Max Born and J. Robert Oppenheimer (1927) to calculate molecular energy levels, the BOA is now formulated in a substantially different way, and is useful for explaining other molecular properties. Recently, Nick Huggett, James Ladyman, and Karim Thébault (2024) published an extensive article (hereinafter referred to as HLT) discussing the BOA. Their primary aim is to express a strong disagreement with our position about the matter, according to which the BOA includes a classical assumption that is incompatible with the Heisenberg Principle. The authors, by contrast, argue that the BOA requires no classical assumption, suggesting that the reduction of chemistry to physics is thereby ensured.
    Found 3 weeks, 2 days ago on PhilSci Archive
  14. 2122472.161217
    Here is a very plausible pair of claims: The Son could have become incarnate as a different human being. God foreknew many centuries ahead of time which human being the Son would become incarnate as. Regarding 1, of course, the Son could not have been a different person—the person the Son is and was and ever shall be is the second person of the Trinity. …
    Found 3 weeks, 3 days ago on Alexander Pruss's Blog
  15. 2131903.161225
    With Matthew Adelstein’s kind permission, here’s the transcript of the Adelstein/Huemer conversation on the ethics of insect suffering. Lightly edited by me. 00:37:48 MATTHEW ADELSTEIN Okay. So, yeah. …
    Found 3 weeks, 3 days ago on Bet On It
  16. 2136446.161234
    The Pusey-Barrett-Rudolph (PBR) theorem proves that the joint wave function ψ1 ⊗ ψ2 of a composite quantum system is ψ-ontic, representing the system’s physical reality. We present a minimalist proof showing that this result, combined with the tensor product structure assigning ψ1 to subsystem 1 and ψ2 to subsystem 2, directly implies that ψ1 and ψ2 are ψ-ontic for their respective subsystems. This establishes ψ-ontology for single quantum systems without requiring preparation independence or other assumptions. Our proof challenges the widely held view that joint ψ-onticity permits subsystem ψ-epistemicity via correlations, providing a simpler, more direct understanding of the wave function’s ontological status in quantum mechanics.
    Found 3 weeks, 3 days ago on PhilSci Archive
  17. 2136468.161241
    Predictive processing is an ambitious neurocomputational framework, offering a unified explanation of all cognitive processes in terms of a single computational operation, namely prediction error minimization. Whilst this ambitious unificatory claim has been thoroughly analyzed, less attention has been paid to what predictive processing entails for structure-function mappings in cognitive neuroscience. We argue that, taken at face value, predictive processing entails an all-to-one structure-function mapping, wherein each individual neural structure is assigned the same function, namely minimizing prediction error. Such a structure-function mapping, we show, is highly problematic. For, barring a few rare occasions, such a structure-function mapping fails to play the predictive, explanatory and heuristic roles structure-function mappings are expected to play in cognitive neuroscience. Worse still, it offers a picture of the brain that we know is wrong. For, it depicts the brain as an equipotential organ; an organ wherein structural differences do not correspond to any appreciable functional difference, and wherein each component can substitute for any other component without causing any loss or degradation of functionality. Somewhat ironically, the very neuroscientific roots of predictive processing motivate a form of skepticism concerning the framework’s most ambitious unificatory claims. Do these problems force us to abandon predictive processing? Not necessarily. For, once the assumption that all cognition can be accounted for exclusively in terms of prediction error minimization is relaxed, the problems we diagnosed lose their bite.
    Found 3 weeks, 3 days ago on PhilSci Archive
  18. 2136492.161252
    A conditional argument is put forth suggesting that if qualia have a functional role in intelligence, then it might be possible, by observing the behavior of verbal AI systems like large language models (LLMs) or other architectures capable of verbal reasoning, to tackle in an empirical way the “strong AI” problem, namely, the possibility that AI systems have subjective experiences, or qualia. The basic premise is that if qualia are functional, and thus have causal roles, then they could affect the production of discourses about qualia and subjective consciousness in general. A thought experiment is put forth envisioning a possible method to probabilistically test the presence of qualia in AI systems based on this conditional argument. The method proposed in the thought experiment focuses on observing whether ideas related to the issue of phenomenal consciousness, such as the so-called “hard problem” of consciousness, or related philosophical issues centered on qualia, spontaneously emerge in extended dialogues involving LLMs specifically trained to be initially oblivious to such philosophical concepts and related ones. By observing the emergence (or lack thereof) in the AI’s verbal production of discussions related to phenomenal consciousness in these contexts, the method seeks to provide empirical evidence for or against the existence of consciousness in AI. An outline of a Bayesian test of the hypothesis is provided. Three main investigative methods with different reliability and feasibility aimed at empirically detecting AI consciousness are proposed: one involving human interaction and two fully automated, consisting of multi-agent conversations between machines. The practical and philosophical challenges involved in the idea of transforming the proposed thought experiments into an actual empirical trial are then discussed.
In light of these considerations, the proposal put forth in the paper appears to be at least a contribution to computational philosophy in the form of philosophical thought experiments focused on computational systems, aimed at refining our philosophical understanding of consciousness. Hopefully, it could also provide hints toward future empirical investigations into machine consciousness.
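The Bayesian test outlined in the abstract can be sketched in miniature. All numbers below are placeholder assumptions of mine for illustration, not values from the paper: H is the hypothesis that the system has qualia, and E is the spontaneous emergence of qualia-talk in a model trained to be oblivious to the topic.

```python
# Minimal Bayesian-update sketch of the kind of probabilistic test the
# paper outlines. All priors and likelihoods are illustrative
# placeholder assumptions, not the paper's values.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) from a prior and two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H: the system has qualia. E: qualia-talk spontaneously emerges.
p = 0.5  # neutral prior (assumption)
for emerged in [True, True, False, True]:  # hypothetical trial outcomes
    if emerged:
        p = posterior(p, 0.6, 0.2)  # assumed likelihoods of emergence
    else:
        p = posterior(p, 0.4, 0.8)  # complements: P(no emergence | H), P(no emergence | not-H)
print(round(p, 3))  # 0.931
```

Each trial (a multi-agent conversation, in the automated variants) updates the credence up or down depending on whether qualia-talk emerges; the evidential force depends entirely on how far apart the two likelihoods are assumed to be.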
    Found 3 weeks, 3 days ago on PhilSci Archive
  19. 2202681.161261
    Van Inwagen famously raised the Special Composition Question (SCQ): What is an informative criterion for when a proper plurality of objects composes a whole? There is, however, the Reverse Special Composition Question (RSCQ): What is an informative criterion for when an object is composed of a proper plurality? …
    Found 3 weeks, 4 days ago on Alexander Pruss's Blog
  20. 2205780.161271
    I’ve been working on a math project involving the periodic table of elements and the Kepler problem—that is, the problem of a particle moving in an inverse square force law. I started it in 2021, but I just finished. …
    Found 3 weeks, 4 days ago on Azimuth
  21. 2211871.16128
    In How Intention Matters, I lamented the common myth that concern for people’s intentions and quality of will was inherently “Kantian” or otherwise non-consequentialist. Today we do the same for autonomy. …
    Found 3 weeks, 4 days ago on Good Thoughts
  22. 2221241.161288
    Most philosophical discussions of natural kinds concern entities in the category of substance: particles, chemical substances, organisms, etc. But I think we shouldn’t forget that there is good reason to posit natural kinds of entities in other categories. …
    Found 3 weeks, 4 days ago on Alexander Pruss's Blog
  23. 2222678.161296
    In the philosophy of religion, ‘de jure objections’ is an umbrella term that covers a wide variety of arguments for the conclusion that theistic belief is rationally impermissible, whether or not God exists. What we call ‘modal Calvinism’ counters these objections by proposing that ‘if God exists, God would ensure that theistic belief is rationally compelling on a global scale’, a modal conditional that is compatible with atheism. We respond to this modal Calvinist argument by examining it through the lenses of probability, modality, and logic – particularly, we apply analytical tools such as possible world semantics, Bayesian reasoning, and paraconsistent models. After examining various forms of the argument, we argue that none can compel atheists to believe that serious theistic possibilities worth considering would involve the purported divine measure.
    Found 3 weeks, 4 days ago on Shawn Standefer's site
  24. 2222768.161308
    Determining appropriate mechanisms for transferring and translating research into policy has become a major concern for researchers (knowledge producers) and policymakers (knowledge users) worldwide. This has led to the emergence of a new function of brokering between researchers and policymakers, and a new type of agent called Knowledge Broker. Understanding these complex multi-agent interactions is critical for an efficient knowledge brokering practice during any given policymaking process. Here we present 1) the current diversity of knowledge broker groups working in the field of biosecurity and environmental management; 2) the incentives linking the different agents involved in the process (knowledge producers, knowledge brokers and knowledge users), and 3) the gaps, needs and challenges to better understand this social ecosystem. We also propose alternatives aimed at improving transparency and efficiency, including future scenarios where the role of artificial intelligence (AI) technologies may become predominant in knowledge-brokering activities.
    Found 3 weeks, 4 days ago on PhilSci Archive
  25. 2222826.161319
    Diagnosing patients with disorders of consciousness involves inductive risk: the risk of false negative and false positive results when gathering and interpreting evidence of consciousness. A recent proposal suggests mitigating that risk by incorporating patient values into methodological choices at the level of individual diagnostic techniques: when using machine-learning algorithms to detect neural evidence of responsiveness to commands, clinicians should consider the patient’s own preferences about whether avoiding false positives or false negatives takes priority (Birch, 2023). In this paper, I argue that this proposal raises concerns about how to ensure that inevitable non-epistemic value judgments do not outweigh epistemic considerations. Additionally, it comes with challenges related to the predictive accuracy of surrogate decision-makers and the decisional burden imposed on them. Hence, I argue that patient values should not be incorporated at the level of gathering evidence of consciousness, but that they should play the leading role when considering how to respond to that evidence.
    Found 3 weeks, 4 days ago on PhilSci Archive
  26. 2286598.161329
    We establish the equivalence of two much debated impartiality criteria for social welfare orders: Anonymity and Permutation Invariance. Informally, Anonymity says that, in order to determine whether one social welfare distribution w is at least as good as another distribution v, it suffices to know, for every welfare level, how many people have that welfare level according to w and how many people have that welfare level according to v. Permutation Invariance, by contrast, says that, to determine whether w is at least as good as v, it suffices to know, for every pair of welfare levels, how many people have that pair of welfare levels in w and v respectively.
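The two criteria can be stated as invariance conditions on what data a comparison may depend on. A small sketch (my notation, not the paper's formalism) makes the informational difference vivid for finite populations: Anonymity fixes the two per-distribution welfare histograms, while Permutation Invariance fixes the histogram of person-wise welfare pairs.

```python
# Illustrative sketch: the data each impartiality criterion says
# suffices to compare two finite welfare distributions w and v.
from collections import Counter

def anonymity_data(w, v):
    """Anonymity: per welfare level, how many people have it in w and in v."""
    return (Counter(w), Counter(v))

def permutation_invariance_data(w, v):
    """Permutation Invariance: per pair of welfare levels, how many
    people have that (w-level, v-level) pair."""
    return Counter(zip(w, v))

# Two comparisons with identical Anonymity data...
w1, v1 = (1, 2), (2, 1)
w2, v2 = (1, 2), (1, 2)
assert anonymity_data(w1, v1) == anonymity_data(w2, v2)
# ...but different pairwise data: Permutation Invariance has, on its
# face, strictly more information available (the marginal histograms
# are recoverable from the pair histogram, not conversely).
assert permutation_invariance_data(w1, v1) != permutation_invariance_data(w2, v2)
print("ok")
```

The paper's result is that, despite this apparent informational gap, the two criteria constrain social welfare orders equivalently.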
    Found 3 weeks, 5 days ago on Jeremy Goodman's site
  27. 2309228.161337
    Recent approaches in quantum gravity suggest that spacetime may not be a fundamental aspect of reality, but rather an emergent phenomenon arising from a more fundamental substratum. This raises a significant challenge for traditional accounts of laws of nature, which are typically grounded in spatiotemporal concepts. This paper discusses two non-Humean strategies for formulating laws of nature in the absence of spacetime: the ‘non-temporal evolution’ approach and the ‘global constraints’ approach. The argument begins by showing that the latter permits a more naturalistic stance than the former. A tentative defence is then provided against the objection that laws as global constraints are too thin to provide genuine metaphysical intelligibility and explanatory power.
    Found 3 weeks, 5 days ago on PhilSci Archive
  28. 2376200.161345
    W.D. Hamilton in 1975 wrote a book chapter that constitutes his most extensive comments on human cooperation. In it he flagged the “tribal facies of social behavior” as the problem to be solved. He was well aware of the difficulty of extending his theory of inclusive fitness to the tribal scale. He mentions the idea that cultural processes might be responsible but expresses skepticism that culture could act against genetic fitness imperatives and sought genetic answers to the puzzle. We have explored the potential of culture to generate the stable variation necessary for selection at the level of tribes and other large human groups. We have modeled three forms of cultural group selection, and reviewed the ample empirical evidence that all three forms are important in humans. The reward and punishment systems in human societies can also create social selection on genes underlying human behavior. One of the critical factors in cultural evolution is that it can be faster than genetic evolution. Here we provide a simple model that illustrates why this is important to the evolution of the tribal facies.
    Found 3 weeks, 6 days ago on Rob Boyd's site
  29. 2394170.161354
    Buckminsterfullerene is a molecule shaped like a soccer ball, made of 60 carbon atoms. If one of the bonds between two hexagons rotates, we get a weird mutant version of this molecule: This is an example of a Stone-Wales transformation: a 90° rotation in a so-called ‘π bond’ between carbon atoms. …
    Found 3 weeks, 6 days ago on Azimuth
  30. 2394170.161362
    I confess that, when I allow myself to think about it, I am amazed that I understand so little about what it is we philosophers do. I believe I can distinguish good philosophical work from bad—I can recognize when philosophy is done well—but I do not have a clear understanding of what it is that I am recognizing, and when I try actually to say what our discipline does, my remarks turn out to be naive and crude, more like the groping efforts of a beginning student than like the contributions of an advanced scholar to the field. …
    Found 3 weeks, 6 days ago on Under the Net