-
We can now simulate environments containing vast numbers of agents engaging in complex interactions. Given projected advances in computing power, it is reasonable to expect that we will one day be able to create simulated agents that think and feel much as we do. One might doubt that sims will ever be able to feel, but the view that sims can be conscious has been defended by proponents of the simulation argument (Bostrom 2003; Chalmers 2022: ch. 5, ch. 15). Here, I assume that sims can be conscious. In what follows, I will use “simulations” to refer narrowly to simulations involving such agents, unless otherwise stated. Additionally, following Bostrom (2003), I will use “simulations” to refer to “ancestor simulations”—simulations of posthuman civilizations’ evolutionary histories, again unless otherwise stated. This focus is not due to any special relevance of rehearsing the details of history; rather, it is meant to keep attention on simulations whose scale and complexity resemble those of our universe.
-
This study examines the relationship between mathematics and physical reality from a critical epistemological perspective. It proposes the concept of "epistemological proportion" as an explanatory mechanism for understanding how mathematical structures interact with scientific knowledge. Through the analysis of selected historical cases and an examination of Gödel's incompleteness theorems, the study develops a balanced theoretical framework that repositions mathematics as a powerful yet fallible cognitive tool. It concludes by emphasizing the need to re-evaluate the role of mathematics in scientific inquiry, underscoring the importance of balancing mathematical elegance with empirical truth.
-
Baroque questions of set-theoretic foundations are widely assumed to be irrelevant to physics. In this article, we challenge this assumption. We show that even such fundamental questions as whether a theory is deterministic — whether it fixes a unique future given the present — depend on set-theoretic axiom candidates over which there is philosophical disagreement.
-
Charles Darwin argued that natural selection produces species analogously to how artificial selection produces breeds. Previous analyses have focused on the formal structure of Darwin’s analogical argument, but few authors have investigated how Darwin’s analogy succeeds in yielding support for his theory in the first place. This topic is particularly salient since, at first blush, Darwin’s analogical argument appears to undermine the very inference he aims to make with it. Darwin held that natural selection produces new species, but artificial selection produces only varieties—a fact which led many of Darwin’s contemporaries to see the analogy as counterevidence to his theory rather than evidence in its favor. I argue that the key to understanding how Darwin’s analogy supports his theory is to recognize three core conceptual revisions to the ‘received view’ of artificial selection for which he argued. Only on Darwin’s resultant ‘revised view’ of artificial selection did his analogical argument support, rather than undermine, his theoretical explanation for the origin of species. These revisions are: 1) the sufficiency of mere differential reproduction for producing evolutionary change; 2) the limitless variation of organisms; and 3) the age and stability of Earth’s geological history. I show why Darwin needed to establish these particular conceptual modifications in order for his analogical argument to generate theoretical support, and I further suggest that accounts focused on the argument’s formal aspects cannot capture the significance of these conceptual revisions to its success.
-
This is probably an old thing that has been discussed to death, but I only now noticed it. Suppose an open future view on which future contingents cannot have a truth value. What happens to entailments? …
-
While correct as far as it goes, this standard picture can encourage an overly sharp distinction between scientific activities and ethical deliberation. Far from entering only at the policy-making stage, ethical judgments often shape scientific research itself. This is most obvious in the choice of research questions. The choice of what to study ultimately affects what knowledge can be brought to bear in real-world decisions, including consequences for which (and whose) decisions can be made with the benefit of scientific insight.
-
Numerous theories of quantum gravity (QG) postulate non-spatiotemporal structures to describe physics at or beyond the Planck energy scale. This stands in stark contrast to the spatiotemporal framework provided by general relativity, which remains remarkably successful in low-energy regimes. The resulting tension gives rise to the so-called disappearance of spacetime (DST): the removal of spatiotemporal structures from the fundamental ontology of a theory, and the corresponding challenge of reconciling this with the general relativistic picture. In this paper, I classify different instances of DST and highlight the necessary trade-off between theory-specific features and general patterns across QG approaches. I argue that a precise formulation of the DST requires prior clarification of the relevant conception of fundamentality. In particular, I distinguish two forms of disappearance, corresponding to intra-theoretic and inter-theoretic fundamentality relations. I argue that intra-theoretic analyses can yield meaningful insights into the DST in QG only when supported by further justificatory arguments. To substantiate my claim, I examine the relationship between string theory, noncommutative geometry, and special relativity.
-
According to the Causal Principle, anything that begins to exist has a cause. In turn, various authors – including Thomas Hobbes, Jonathan Edwards, and Arthur Prior – have defended the thesis that, had the Causal Principle been false, there would be no good explanation for why entities do not begin at arbitrary times, in arbitrary spatial locations, in arbitrary number, or of arbitrary kind. I call this the Hobbes-Edwards-Prior Principle (HEPP). However, according to a view popular among both philosophers of physics and naturalistic metaphysicians – Neo-Russellianism – causation is absent from fundamental physics. I argue that objections based on the HEPP should have no dialectical force for Neo-Russellians. While Neo-Russellians maintain that there is no causation in fundamental physics, they also have good reason to reject the HEPP.
-
The Born-Oppenheimer Approximation (BOA) is a widely used strategy in quantum chemistry due to its efficacy within its domain of validity. Originally proposed by Max Born and J. Robert Oppenheimer (1927) to calculate molecular energy levels, the BOA is now formulated in a substantially different way and is used to explain other molecular properties as well. Recently, Nick Huggett, James Ladyman, and Karim Thébault (2024) published an extensive article (hereinafter referred to as HLT) discussing the BOA. Their primary aim is to express strong disagreement with our position on the matter, according to which the BOA includes a classical assumption that is incompatible with the Heisenberg Principle. The authors, by contrast, argue that the BOA requires no classical assumption, suggesting that the reduction of chemistry to physics is thereby ensured.
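For reference, here is the textbook core of the BOA (standard material, not HLT’s formulation or ours): the molecular wave function is factored into an electronic part solved at fixed nuclear positions and a nuclear part moving on the resulting energy surface.

```latex
% Textbook sketch of the Born-Oppenheimer factorization.
% The molecular wave function is approximated as a product:
\[ \Psi(\mathbf{r},\mathbf{R}) \;\approx\; \psi_{\mathrm{el}}(\mathbf{r};\mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R}) \]
% The electronic problem is solved with the nuclei clamped at R:
\[ H_{\mathrm{el}}(\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r};\mathbf{R}) \;=\; E_{\mathrm{el}}(\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r};\mathbf{R}) \]
% E_el(R) then serves as the potential governing nuclear motion. The
% clamped-nuclei step, which treats the nuclear positions R as definite,
% is where the dispute over a "classical assumption" is located.
```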
-
Here is a very plausible pair of claims:
1. The Son could have become incarnate as a different human being.
2. God foreknew many centuries ahead of time which human being the Son would become incarnate as.

Regarding 1, of course, the Son could not have been a different person—the person the Son is and was and ever shall be is the second person of the Trinity. …
-
With Matthew Adelstein’s kind permission, here’s the transcript of the Adelstein/Huemer conversation on the ethics of insect suffering. Lightly edited by me.

00:37:48 MATTHEW ADELSTEIN
Okay. So, yeah. …
-
The Pusey-Barrett-Rudolph (PBR) theorem proves that the joint wave function ψ1 ⊗ ψ2 of a composite quantum system is ψ-ontic, representing the system’s physical reality. We present a minimalist proof showing that this result, combined with the tensor product structure assigning ψ1 to subsystem 1 and ψ2 to subsystem 2, directly implies that ψ1 and ψ2 are ψ-ontic for their respective subsystems. This establishes ψ-ontology for single quantum systems without requiring preparation independence or other assumptions. Our proof challenges the widely held view that joint ψ-onticity permits subsystem ψ-epistemicity via correlations, providing a simpler, more direct understanding of the wave function’s ontological status in quantum mechanics.
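In schematic form, the inference as I read this abstract runs as follows (my reconstruction, not the authors’ own notation):

```latex
% Sketch of the minimalist argument, reconstructed from the abstract.
% PBR: the ontic state \lambda of the composite fixes the joint state,
\[ \lambda \;\mapsto\; \Psi(\lambda) \;=\; \psi_1 \otimes \psi_2 \]
% A nonzero product state fixes its factors up to opposite phases,
\[ \psi_1 \otimes \psi_2 = \phi_1 \otimes \phi_2 \;\Longrightarrow\; \phi_1 = e^{i\theta}\psi_1,\quad \phi_2 = e^{-i\theta}\psi_2 \]
% so \lambda determines the rays of both factors,
\[ \lambda \;\mapsto\; [\psi_1], \qquad \lambda \;\mapsto\; [\psi_2] \]
% i.e. each subsystem's wave function is \psi-ontic, with no appeal
% to preparation independence.
```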
-
Predictive processing is an ambitious neurocomputational framework, offering a unified explanation of all cognitive processes in terms of a single computational operation, namely prediction error minimization. Whilst this ambitious unificatory claim has been thoroughly analyzed, less attention has been paid to what predictive processing entails for structure-function mappings in cognitive neuroscience. We argue that, taken at face value, predictive processing entails an all-to-one structure-function mapping, wherein each individual neural structure is assigned the same function, namely minimizing prediction error. Such a structure-function mapping, we show, is highly problematic. For, barring a few rare occasions, it fails to play the predictive, explanatory and heuristic roles structure-function mappings are expected to play in cognitive neuroscience. Worse still, it offers a picture of the brain that we know is wrong. For, it depicts the brain as an equipotential organ: an organ wherein structural differences do not correspond to any appreciable functional difference, and wherein each component can substitute for any other component without causing any loss or degradation of functionality. Somewhat ironically, the very neuroscientific roots of predictive processing motivate a form of skepticism concerning the framework’s most ambitious unificatory claims. Do these problems force us to abandon predictive processing? Not necessarily. For, once the assumption that all cognition can be accounted for exclusively in terms of prediction error minimization is relaxed, the problems we diagnosed lose their bite.
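For readers unfamiliar with the framework, the single operation in question can be made concrete with a toy example. The sketch below is a generic illustration of prediction error minimization (not a model from the paper): both inference and learning are driven by one and the same error signal.

```python
import numpy as np

# Toy predictive coding: one latent layer predicting an input vector.
# Every update below is driven by the same quantity: the prediction error.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))   # generative weights: latents -> input
x = rng.normal(size=8)                   # observed input
mu = np.zeros(4)                         # latent estimate

lr = 0.05
for _ in range(200):
    eps = x - W @ mu                      # prediction error at the input layer
    mu += lr * (W.T @ eps)                # inference: adjust latents to cut error
    W += 0.1 * lr * np.outer(eps, mu)     # learning: the same error signal reused

print(f"residual prediction error: {np.linalg.norm(x - W @ mu):.4f}")
```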
-
A conditional argument is put forth suggesting that if qualia have a functional role in intelligence, then it might be possible, by observing the behavior of verbal AI systems such as large language models (LLMs) or other architectures capable of verbal reasoning, to tackle in an empirical way the “strong AI” problem, namely, the possibility that AI systems have subjective experiences, or qualia. The basic premise is that if qualia are functional, and thus have causal roles, then they could affect the production of discourse about qualia and subjective consciousness in general. A thought experiment is put forth envisioning a possible method to probabilistically test for the presence of qualia in AI systems on the basis of this conditional argument. The proposed method focuses on observing whether ideas related to phenomenal consciousness, such as the so-called “hard problem” of consciousness or related philosophical issues centered on qualia, spontaneously emerge in extended dialogues involving LLMs specifically trained to be initially unaware of such philosophical concepts and related ones. By observing the emergence (or lack thereof) of discussions of phenomenal consciousness in the AI’s verbal production in these contexts, the method seeks to provide empirical evidence for or against the existence of consciousness in AI. An outline of a Bayesian test of the hypothesis is provided. Three main investigative methods, differing in reliability and feasibility, are proposed for empirically detecting AI consciousness: one involving human interaction and two fully automated ones consisting of multi-agent conversations between machines. The practical and philosophical challenges involved in transforming the proposed thought experiments into an actual empirical trial are then discussed. In light of these considerations, the proposal put forth in the paper is at least a contribution to computational philosophy in the form of philosophical thought experiments focused on computational systems, aimed at refining our philosophical understanding of consciousness. Hopefully, it can also provide hints toward future empirical investigations into machine consciousness.
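The abstract only outlines the Bayesian test, but its general shape can be sketched as follows. All priors and likelihoods in this toy code are invented placeholders, not values from the paper:

```python
# Hypothetical sketch of the Bayesian update behind the proposed test.
# All numbers are placeholders; the paper only outlines the scheme.
prior_conscious = 0.1  # P(H): the system has qualia

# Likelihood that qualia-talk spontaneously emerges in one extended dialogue,
# given that the model was trained to be initially unaware of such concepts:
p_emerge_given_h = 0.30      # P(E | H)
p_emerge_given_not_h = 0.02  # P(E | not-H), e.g. residual training leakage

def update(prior: float, emerged: bool) -> float:
    """One Bayesian update on the outcome of a single dialogue."""
    l_h = p_emerge_given_h if emerged else 1 - p_emerge_given_h
    l_n = p_emerge_given_not_h if emerged else 1 - p_emerge_given_not_h
    return (l_h * prior) / (l_h * prior + l_n * (1 - prior))

p = prior_conscious
for outcome in [True, False, True, True]:  # dialogue outcomes (made up)
    p = update(p, outcome)
print(f"posterior P(conscious | evidence) = {p:.3f}")
```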
-
Van Inwagen famously raised the Special Composition Question (SCQ): What is an informative criterion for when a proper plurality of objects composes a whole? There is, however, the Reverse Special Composition Question (RSCQ): What is an informative criterion for when an object is composed of a proper plurality? …
-
I’ve been working on a math project involving the periodic table of elements and the Kepler problem—that is, the problem of a particle moving under an inverse-square force law. I started it in 2021, but I just finished. …
-
In How Intention Matters, I lamented the common myth that concern for people’s intentions and quality of will was inherently “Kantian” or otherwise non-consequentialist. Today we do the same for autonomy. …
-
Most philosophical discussions of natural kinds concern entities in the category of substance: particles, chemical substances, organisms, etc. But I think we shouldn’t forget that there is good reason to posit natural kinds of entities in other categories. …
-
In the philosophy of religion, ‘de jure objections’ is an umbrella term that covers a wide variety of arguments for the conclusion that theistic belief is rationally impermissible, whether or not God exists. What we call ‘modal Calvinism’ counters these objections by proposing that ‘if God exists, God would ensure that theistic belief is rationally compelling on a global scale’, a modal conditional that is compatible with atheism. We respond to this modal Calvinist argument by examining it through the lenses of probability, modality, and logic – in particular, we apply analytical tools such as possible world semantics, Bayesian reasoning, and paraconsistent models. After examining various forms of the argument, we argue that none can compel atheists to believe that the serious theistic possibilities worth considering would involve the purported divine measure.
-
Alejandro Bortolus, Chad L. Hewitt, Veli Mitova, Evangelina Schwindt, Temitope O. Sogbanmu, Emelda E. Chukwu, Remco Heesen, Ricardo Kaufer, Hannah Rubin, Mike D. Schneider, Anne Schwenkenbecher, Helena Slanickova, Katie Woolaston, Li-an Yu
Determining appropriate mechanisms for transferring and translating research into policy has become a major concern for researchers (knowledge producers) and policymakers (knowledge users) worldwide. This has led to the emergence of a new function of brokering between researchers and policymakers, and a new type of agent called Knowledge Broker. Understanding these complex multi-agent interactions is critical for an efficient knowledge brokering practice during any given policymaking process. Here we present 1) the current diversity of knowledge broker groups working in the field of biosecurity and environmental management; 2) the incentives linking the different agents involved in the process (knowledge producers, knowledge brokers and knowledge users), and 3) the gaps, needs and challenges to better understand this social ecosystem. We also propose alternatives aimed at improving transparency and efficiency, including future scenarios where the role of artificial intelligence (AI) technologies may become predominant in knowledge-brokering activities.
-
Diagnosing patients with disorders of consciousness involves inductive risk: the risk of false negative and false positive results when gathering and interpreting evidence of consciousness. A recent proposal suggests mitigating that risk by incorporating patient values into methodological choices at the level of individual diagnostic techniques: when using machine-learning algorithms to detect neural evidence of responsiveness to commands, clinicians should consider the patient’s own preferences about whether avoiding false positives or false negatives takes priority (Birch, 2023). In this paper, I argue that this proposal raises concerns about how to ensure that inevitable non-epistemic value judgments do not outweigh epistemic considerations. Additionally, it comes with challenges related to the predictive accuracy of surrogate decision-makers and the decisional burden imposed on them. Hence, I argue that patient values should not be incorporated at the level of gathering evidence of consciousness, but that they should play the leading role when considering how to respond to that evidence.
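As a toy illustration of where such a value judgment would enter (my sketch, not the detail of Birch’s proposal): the decision threshold of a command-following classifier can be set to minimize a patient-weighted cost of false negatives versus false positives.

```python
import numpy as np

# Toy illustration of value-laden thresholding (not from Birch 2023).
# scores[i] = classifier evidence that trial i shows command-following.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.3, 0.15, 500),    # truly unresponsive
                         rng.normal(0.7, 0.15, 500)])   # truly responsive
labels = np.concatenate([np.zeros(500), np.ones(500)])

def expected_cost(threshold: float, cost_fn: float, cost_fp: float) -> float:
    pred = scores >= threshold
    fn = np.sum(~pred & (labels == 1))   # missed evidence of responsiveness
    fp = np.sum(pred & (labels == 0))    # spurious evidence of responsiveness
    return cost_fn * fn + cost_fp * fp

ts = np.linspace(0, 1, 101)
# A patient who most dreads being missed (false negatives weighted 5x) gets
# a lower threshold than one who most dreads a false finding of awareness:
for c_fn, c_fp in [(5, 1), (1, 5)]:
    best = ts[np.argmin([expected_cost(t, c_fn, c_fp) for t in ts])]
    print(f"cost(FN)={c_fn}, cost(FP)={c_fp} -> threshold {best:.2f}")
```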
-
We establish the equivalence of two much debated impartiality criteria for social welfare orders: Anonymity and Permutation Invariance. Informally, Anonymity says that, in order to determine whether one social welfare distribution w is at least as good as another distribution v, it suffices to know, for every welfare level, how many people have that welfare level according to w and how many people have that welfare level according to v. Permutation Invariance, by contrast, says that, to determine whether w is at least as good as v, it suffices to know, for every pair of welfare levels, how many people have that pair of welfare levels in w and v respectively.
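In code, the informational difference between the two criteria is just the difference between counting welfare levels separately and counting person-by-person pairs of levels (a sketch under my own formalization, not the paper’s):

```python
from collections import Counter

# Two welfare distributions over the same four people: person i gets w[i]
# under w and v[i] under v.
w = (3, 1, 2, 2)
v = (1, 3, 2, 2)

# Anonymity: the comparison of w and v may depend only on these two
# separate level-counts.
anonymity_data = (Counter(w), Counter(v))

# Permutation Invariance: the comparison may also depend on how levels
# pair up person-by-person across the two distributions.
permutation_data = Counter(zip(w, v))

print(anonymity_data)    # w and v have identical level-counts here
print(permutation_data)  # pair-counts: {(3,1): 1, (1,3): 1, (2,2): 2}
```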
-
Recent approaches in quantum gravity suggest that spacetime may not be a fundamental aspect of reality, but rather an emergent phenomenon arising from a more fundamental substratum. This raises a significant challenge for traditional accounts of laws of nature, which are typically grounded in spatiotemporal concepts. This paper discusses two non-Humean strategies for formulating laws of nature in the absence of spacetime: the ‘non-temporal evolution’ approach and the ‘global constraints’ approach. The argument begins by showing that the latter permits a more naturalistic stance than the former. A tentative defence is then provided against the objection that laws as global constraints are too thin to provide genuine metaphysical intelligibility and explanatory power.
-
In 1975, W. D. Hamilton wrote a book chapter that constitutes his most extensive comments on human cooperation. In it he flagged the “tribal facies of social behavior” as the problem to be solved. He was well aware of the difficulty of extending his theory of inclusive fitness to the tribal scale. He mentioned the idea that cultural processes might be responsible, but he expressed skepticism that culture could act against genetic fitness imperatives and sought genetic answers to the puzzle. We have explored the potential of culture to generate the stable variation necessary for selection at the level of tribes and other large human groups. We have modeled three forms of cultural group selection and reviewed the ample empirical evidence that all three forms are important in humans. The reward and punishment systems in human societies can also create social selection on genes underlying human behavior. One of the critical factors in cultural evolution is that it can be faster than genetic evolution. Here we provide a simple model that illustrates why this is important to the evolution of the tribal facies.
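The point about speed can be given a toy illustration (mine, not the authors’ model): between-group variation survives migration only when local adaptation outpaces mixing, which fast cultural transmission can achieve and slow genetic response typically cannot.

```python
# Toy two-timescale sketch (not the authors' model): a group adapts toward a
# local optimum at rate r while migration mixes it back toward the
# metapopulation mean. Fast (cultural) adaptation keeps the group distinct;
# slow (genetic) adaptation lets migration erase the difference.
def equilibrium_deviation(r: float, m: float = 0.05,
                          p0: float = 0.6, steps: int = 2000) -> float:
    """Trait frequency in a group whose local optimum is p = 1, with
    migrants arriving from a metapopulation averaging p = 0.5."""
    p = p0
    for _ in range(steps):
        p += r * p * (1 - p)   # local adaptive transmission toward p = 1
        p -= m * (p - 0.5)     # migration pulls back toward the global mean
        p = min(1.0, max(0.0, p))
    return p - 0.5             # how distinct the group remains

print(f"cultural (r = 0.20): deviation = {equilibrium_deviation(0.20):+.2f}")
print(f"genetic  (r = 0.01): deviation = {equilibrium_deviation(0.01):+.2f}")
```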
-
Buckminsterfullerene is a molecule shaped like a soccer ball, made of 60 carbon atoms. If one of the bonds between two hexagons rotates, we get a weird mutant version of this molecule:
This is an example of a Stone-Wales transformation: a 90° rotation in a so-called ‘π bond’ between carbon atoms. …
-
I confess that, when I allow myself to think about it, I am amazed that I understand so little about what it is we philosophers do. I believe I can distinguish good philosophical work from bad—I can recognize when philosophy is done well—but I do not have a clear understanding of what it is that I am recognizing, and when I try actually to say what our discipline does, my remarks turn out to be naive and crude, more like the groping efforts of a beginning student than like the contributions of an advanced scholar to the field. …
-
Ever since Carlo Rovelli introduced Relational Quantum Mechanics (RQM) to the public [1], it has attracted the interest and stimulated the imagination not only of physicists, but also, and in particular, of philosophers. There are several reasons why that is so. One of them is, quite simply, that a renowned and highly esteemed researcher had offered a new programmatic attempt to make sense of the longstanding puzzles at the foundations of quantum theory, which only happens every so often. What is more, the key to these puzzles was supposed to lie in an essentially conceptual move, in the exposure of an “incorrect notion” [1, p. 1637]. But the modern-day philosopher regards concepts as something like their natural hunting ground. If mention is made of the word, it makes them sit up as if somebody had yelled their name.
-
The idea that qualities can be had partly or to an intermediate degree is controversial among contemporary metaphysicians, but also has a considerable pedigree among philosophers and scientists. In this paper, we first aim to show that metaphysical sense can be made of this idea by proposing a partial taxonomy of metaphysical accounts of graded qualities, focusing on three particular approaches: one which explicates having a quality to a degree in terms of having a property with an in-built degree, another based on the idea that instantiation admits of degrees, and a third which derives the degree to which a quality is had from the aspects of multidimensional properties. Our second aim is to demonstrate that the choice between these accounts can make a substantial metaphysical difference. To make this point, we rely on two case studies (involving quantum observables and values) in which we apply the accounts in order to model apparent cases of metaphysical gradedness.
-
A standard view of reasons is that reasons are propositions or facts that support an action. Thus, that I promised to visit is a reason to visit, that pain is bad is a reason to take an aspirin, and that I am hungry is a reason to eat. …