1.
    We show that the dynamical common core of the recently discovered non-relativistic geometric trinity of gravity is Maxwell gravitation. Moreover, we explain why no analogous distinct dynamical common core exists in the case of the better-known relativistic geometric trinity of gravity.
    Found 2 hours, 37 minutes ago on PhilSci Archive
  2.
    The news these days feels apocalyptic to me—as if we’re living through, if not the last days of humanity, then surely the last days of liberal democracy on earth. All the more reason to ignore all of that, then, and blog instead about the notorious Busy Beaver function! …
    Found 8 hours, 11 minutes ago on Scott Aaronson's blog
  3.
    Human Technology Interaction, Eindhoven University of Technology. *[Some earlier posts by D. Lakens on this topic are listed at the end of part 2, forthcoming this week] How were we supposed to move beyond p < .05, and why didn’t we? …
    Found 19 hours, 18 minutes ago on D. G. Mayo's blog
  4.
    The scaling hypothesis in artificial intelligence claims that a model’s cognitive ability scales with increased compute. This hypothesis has two interpretations: a weak version, where model error rates decrease as a power-law function of compute, and a strong version, where, as error rates decrease, new cognitive abilities unexpectedly emerge. We argue that the first is falsifiable but the second is not, because it fails to make exact predictions about which abilities emerge and when. This points to the difficulty of measuring cognitive abilities in algorithms, since we lack good ecologically valid measurements of those abilities.
    Found 2 days, 11 hours ago on PhilSci Archive
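    A note on the weak version’s falsifiability: it predicts that held-out error falls as a power law in training compute, which is a checkable functional form. The sketch below fits such a law in Python; the data points, the fitted exponent, and the fitting routine are illustrative assumptions, not anything from the paper.

```python
# Minimal sketch: fit the weak scaling hypothesis, error ~ a * compute^(-b),
# by linear regression in log-log space. All numbers are made up for illustration.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # hypothetical training FLOPs
error = np.array([0.52, 0.38, 0.27, 0.20, 0.14])    # hypothetical held-out error rates

# If the weak version holds, log(error) = log(a) - b*log(compute) is a straight line.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: error ~= {a:.3g} * compute^(-{b:.3f})")

# A systematic departure from linearity in log-log space (say, a plateau) would
# falsify the weak version; the strong version makes no comparable point
# prediction about which abilities emerge at which compute scale.
```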
  5.
    Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. We use network models to show that moderate confirmation bias often improves group learning. However, a downside is that a stronger form of confirmation bias can hurt the knowledge-producing capacity of the community.
    Found 2 days, 13 hours ago on Cailin O’Connor's site
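    The kind of model described above can be gestured at with a toy simulation. The sketch below is a bare-bones, Bala–Goyal-style two-armed-bandit network in Python with a single confirmation-bias parameter (the probability of discarding belief-contradicting evidence); the network structure, payoffs, and bias rule are illustrative assumptions, not the authors’ model, and varying the bias lets one explore when it helps or hurts convergence.

```python
# Toy network-epistemology sketch: agents choose between a known action
# (success rate 0.5) and an uncertain one (true rate 0.55), share evidence on a
# complete network, and may discard belief-contradicting evidence with
# probability `bias` (confirmation bias). Illustrative only.
import random
from math import comb

N_AGENTS, N_ROUNDS, N_TRIALS = 10, 500, 10
P_GOOD, P_BAD = 0.55, 0.5

def run(bias, seed=0):
    random.seed(seed)
    creds = [random.random() for _ in range(N_AGENTS)]  # credence: uncertain arm is better
    for _ in range(N_ROUNDS):
        # Agents who favor the uncertain arm test it; the rest pull the known
        # arm, which yields no evidence about the uncertain one.
        results = [sum(random.random() < P_GOOD for _ in range(N_TRIALS))
                   for c in creds if c > 0.5]
        for i, c in enumerate(creds):
            for k in results:
                like_good = comb(N_TRIALS, k) * P_GOOD**k * (1 - P_GOOD)**(N_TRIALS - k)
                like_bad = comb(N_TRIALS, k) * P_BAD**k * (1 - P_BAD)**(N_TRIALS - k)
                contradicts = (like_good > like_bad) != (c > 0.5)
                if contradicts and random.random() < bias:
                    continue  # confirmation bias: contrary evidence silently dropped
                c = c * like_good / (c * like_good + (1 - c) * like_bad)  # Bayes update
            creds[i] = c
        if not any(c > 0.5 for c in creds):
            break  # nobody tests the uncertain arm anymore: inquiry has stalled
    return sum(c > 0.5 for c in creds)

for bias in (0.0, 0.3, 0.9):
    print(f"bias={bias}: {run(bias)}/{N_AGENTS} agents end on the true view")
```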
  6.
    The Good Regulator Theorem and the Internal Model Principle are sometimes cited as mathematical proofs that an agent needs an internal model of the world in order to have an optimal policy. However, these principles rely on a definition of “internal model” that is far too permissive, applying even to cases of systems that do not use an internal model. As a result, these principles do not provide evidence (let alone a proof) that internal models are necessary. The paper also diagnoses what is missing in the GRT and IMP definitions of internal model, which is that models need to make predictions that represent variables in the target system (and these representations need to be usable by an agent so as to guide behavior).
    Found 3 days, 10 hours ago on PhilSci Archive
  7.
    Problems in moral philosophy and philosophy of religion can take on new forms in light of contemporary physical theories. Here we discuss how the problem of evil is transformed by the Everettian “Many-Worlds” theory of quantum mechanics. We first present an Everettian version of the problem and contrast it with the problem in single-universe physical theories such as Newtonian mechanics and Bohmian mechanics. We argue that, pace Turner (2016) and Zimmerman (2017), the Everettian problem of evil is no more extreme than the Bohmian one. The existence and multiplicity of (morally) terrible branches in the Everettian multiverse, in contrast to the mere possibility of them in the Bohmian universe, does not entail that there is “more evil” in the former than in the latter. Low probability in the Bohmian case and low branch weight in the Everettian case should modulate how we respond to them in exactly the same way. We suggest that the same applies to the divine decision to create an Everettian multiverse. For an empirically adequate Everettian quantum mechanics that justifies the Born rule, there is no special problem of evil. In order for there to be a special Everettian problem of evil, the Everettian interpretation must already have been exposed to decisive refutation. In the process, we hope to show how attention to the details of physical and metaphysical theories can and should impact the way we think about problems in moral philosophy and philosophy of religion.
    Found 4 days, 9 hours ago on Eddy Keming Chen's site
  8.
    Among the various proposals for quantum ontology, both wavefunction realists and the primitive ontologists have argued that their approach is to be preferred because it relies on intuitive notions: locality, separability and spatiotemporality. As such, these proposals should be seen as normative frameworks asserting that one should choose the fundamental ontology which preserves these intuitions, even if they disagree about their relative importance: wavefunction realists favor preserving locality and separability, while primitive ontologists advocate for spatiotemporality. In this paper, first I clarify the main tenets of wavefunction realism and the primitive ontology approach, arguing that seeing the latter as favoring constructive explanation makes sense of their requirement of a spatiotemporal ontology. Then I show how the aforementioned intuitive notions cannot all be kept in the quantum domain. Consequently, wavefunction realists rank locality and separability higher than spatiotemporality, while primitive ontologists do the opposite. I conclude, however, that the choice of which notions to favor is not as arbitrary as it might seem. In fact, they are not independent: requiring locality and separability can soundly be justified by requiring spatiotemporality, and not the other way around. If so, the primitive ontology approach has a better justification of its intuitions than its rival wavefunction realist framework.
    Found 4 days, 14 hours ago on Valia Allori's site
  9.
    I review a counterexample to the frequent claim that discrepancies among observers resulting from conventional quantum theory’s inability to define “measurement,” such as those arising in the Wigner’s Friend thought experiment, remain private and incommensurable. I consider the implications for a recent attempt to shield Relational Quantum Mechanics from such inconsistencies and conclude that it is not successful.
    Found 4 days, 18 hours ago on PhilSci Archive
  10.
    If morality and self-interest don’t always coincide—if sometimes doing what’s right isn’t also best for you—morality can sometimes require you to do what will be worse for you or to forgo an act that would benefit you. But some philosophers think a reasonable morality can’t be too demanding in this sense and have proposed moral views that are less so.
    Found 4 days, 19 hours ago on Stanford Encyclopedia of Philosophy
  11.
    Causal decision theorists are vulnerable to a money pump if they update by conditioning when they learn what they have chosen. Nevertheless, causal decision theorists are immune to money pumps if they instead update by imaging on their choices and by conditioning on other things (and, in addition, evaluate plans rather than choices). I also show that David Lewis’s Dutch-book argument for conditioning does not work when you update on your choices. Even so, a collective of causal decision theorists are still exploitable even if they start off with the same preferences and the same credences and will all see the same evidence. Evidential decision theorists who consistently update by conditioning are not exploitable in this way.
    Found 5 days, 7 hours ago on Johan E. Gustafsson's site
  12.
    Recent decades have seen much progress in the understanding of determinism, its embodiments in concrete physical theories, and its relevance to long-standing issues in philosophy. Moreover, we have seen a growing interest in super-determinism. In contrast, strong determinism has received little attention. In this paper, I want to examine what it is and how it impacts some of the central issues in metaphysics and philosophy of science. Strong determinism, according to Penrose [1989], is “not just a matter of the future being determined by the past; the entire history of the universe is fixed, according to some precise mathematical scheme, for all time” (emphasis original, p. 432). This definition, I argue, risks trivializing the distinction between determinism and strong determinism. My first task is to define strong determinism in terms of fundamental laws: a strongly deterministic theory of physics is one that, according to its fundamental laws, permits exactly one nomologically possible world; our world is strongly deterministic just in case it is the only nomologically possible world. Importantly, we expect fundamental laws to be simple, which partly explains why strong determinism is difficult to achieve.
    Found 5 days, 9 hours ago on Philosopher's Imprint
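    The contrast the paper draws can be put compactly; the notation below is a paraphrase of the definitions quoted in the abstract, not the paper’s own formalism.

```latex
% Let \Omega_T be the set of worlds nomologically possible according to theory T.
% Determinism: agreement up to a time forces agreement everywhere,
\forall w, w' \in \Omega_T : \; w|_{t \le t_0} = w'|_{t \le t_0} \;\Rightarrow\; w = w'.
% Strong determinism: the laws alone leave no choice at all,
|\Omega_T| = 1.
% Our world is strongly deterministic iff it is the unique member of \Omega_T.
```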
  13.
    I consider the classical (i.e., non-relativistic) limit of Teleparallel Gravity, a relativistic theory of gravity that is empirically equivalent to General Relativity and features torsional forces. I show that as the speed of light is allowed to become infinite, Teleparallel Gravity reduces to Newtonian Gravity without torsion. I compare these results to the torsion-free context and discuss their implications for the purported underdetermination between Teleparallel Gravity and General Relativity. I conclude by considering alternative approaches to the classical limit developed in the literature.
    Found 6 days, 2 hours ago on PhilSci Archive
  14.
    The field of Artificial Intelligence (AI) safety evaluations aims to test AI behavior for problematic capabilities like deception. However, some scientists have cautioned against using behavior to infer general cognitive abilities, because of the human tendency to overattribute cognition to everything. They recommend adopting a heuristic to avoid these errors, according to which behavior provides no evidence for cognitive capabilities unless some theoretical feature is present to justify that inference.
    Found 6 days, 2 hours ago on PhilSci Archive
  15.
    Recent experimental advances suggest we may soon be able to probe the gravitational field of a mass in a coherent superposition of position states—a system which is widely believed to lie outside the scope of classical and semiclassical gravity. The recent theoretical literature has applied the idea of quantum reference frames (QRFs), originally introduced for non-gravitational contexts, to such a scenario.
    Found 6 days, 2 hours ago on PhilSci Archive
  16.
    We define a notion of inaccessibility of a decision between two options represented by utility functions, where the decision is based on the order of the expected values of the two utility functions. The inaccessibility expresses that the decision cannot be obtained if the expected values of the utility functions are calculated using the conditional probability defined by a prior and by partial evidence about the probability that determines the decision. Examples of inaccessible decisions are given in finite probability spaces. Open questions and conjectures about the inaccessibility of decisions are formulated. The results are interpreted as showing the crucial role of priors in the Bayesian taming of epistemic uncertainties about the probabilities that determine decisions based on utility maximization.
    Found 6 days, 2 hours ago on PhilSci Archive
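    One way to read the setup described in the abstract (my notation, not necessarily the paper’s): the decision tracks the order of two expectations under the true probability, and inaccessibility says that this order cannot be recovered from a prior conditioned on partial evidence.

```latex
% Decision driven by an unknown probability p on a finite space:
\text{choose } U_1 \text{ over } U_2 \iff E_p[U_1] > E_p[U_2].
% An agent with prior q and partial evidence A about p computes instead
E_{q(\cdot \mid A)}[U_1] \text{ vs. } E_{q(\cdot \mid A)}[U_2].
% The decision is inaccessible from (q, A) when the conditioned expectations
% fail to reproduce the true order -- whence the crucial role of the prior q.
```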
  17.
    Postgraduate research training in the United Kingdom often focuses narrowly on domain-specific methods, neglecting wider philosophical topics such as epistemology and scientific method. Consequently, we designed a workshop on (inductive, deductive, and abductive) inference for postgraduate researchers. We ran the workshop three times with attendees (N = 29) from across four universities, testing the potential benefits of the workshop in a mixed-method, repeated-measures design. Our core aims were to investigate what attendees learned from the workshop, and whether they felt it had impacted on their research practices six months later. Overall, learning inferential logic benefitted postgraduate researchers in various ways and to varying degrees. Six months on, roughly half of attendees reported being more critical of key aspects of research such as inferences and study design. Additionally, some attendees reported more subtle effects, such as prompting new lines of thought and inquiry. Given that self-criticism and scepticism are fundamental intellectual virtues, these results evidence the importance of embedding epistemological training into doctoral programmes across the UK.
    Found 6 days, 2 hours ago on PhilSci Archive
  18.
    Department of Statistical Sciences “Paolo Fortunati”, University of Bologna. [An earlier post by C. Hennig on this topic: Jan 9, 2022: The ASA controversy on P-values as an illustration of the difficulty of statistics] Statistical tests in five random research papers of 2024, and related thoughts on the “don’t say significant” initiative. This text follows an invitation to write on “abandon statistical significance 5 years on”, so I decided to do a tiny bit of empirical research. …
    Found 6 days, 18 hours ago on D. G. Mayo's blog
  19.
    Artificial General Intelligence (AGI) is said to pose many risks, be they catastrophic, existential, or otherwise. This paper discusses whether the notion of risk can apply to AGI, both descriptively and in the current regulatory framework. The paper argues that current definitions of risk are ill-suited to capture supposed AGI existential risks, and that the risk-based framework of the EU AI Act is inadequate to deal with truly general, agential systems.
    Found 1 week ago on Federico L. G. Faroldi's site
  20.
    In quantum field theory, Hamiltonians contain particle creation and annihilation terms that are usually ultraviolet (UV) divergent. It is well known that these divergences can sometimes be removed by adding counter-terms and by taking limits in which a UV cutoff tends toward infinity. Here, I review a novel way of removing UV divergences: by imposing a type of boundary condition on the wave function. These conditions, called interior-boundary conditions (IBCs), relate the values of the wave function at two configurations linked by the creation or annihilation of a particle. They allow for a direct definition of the Hamiltonian without renormalization or limiting procedures. In the last section, I review another boundary condition that serves to determine the probability distribution of detection times and places on a time-like 3-surface.
    Found 1 week ago on R. Tumulka's site
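    Schematically, and suppressing model-dependent details, an interior-boundary condition ties together Fock-space sectors that differ by one particle; the form below is a simplified gloss, not any specific equation from the paper.

```latex
% Interior-boundary condition (schematic): where the extra particle x of the
% (n+1)-particle sector reaches a particle location q_j of the n-particle
% configuration q, the two sectors are linked pointwise,
\lim_{x \to q_j} f(x)\, \psi^{(n+1)}(q, x) \;=\; \gamma \, \psi^{(n)}(q),
% with \gamma a coupling constant and f(x) a model-dependent weight. Because the
% sectors are coupled by a boundary condition rather than by a divergent
% interaction term, the Hamiltonian can be defined without a UV cutoff.
```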
  21.
    We report on the mechanization of (preference-based) conditional normative reasoning. Our focus is on Åqvist’s system E for conditional obligation, and its extensions. Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first one is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has been previously achieved for the modal logic cube. The equivalence is automatically verified in one direction, leading from the property to the axiom. The second use is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox (or impossibility theorem) in population ethics, Parfit’s repugnant conclusion. While some have proposed overcoming the impossibility theorem by abandoning the presupposed transitivity of “better than,” our formalisation unveils a less extreme approach, suggesting among other things the option of weakening transitivity suitably rather than discarding it entirely. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
    Found 1 week ago on X. Parent's site
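    The preference-based reading of conditional obligation being mechanized can be illustrated with a toy model checker: O(B|A) holds when every best A-world is a B-world. The Python below is a plain “best-worlds” sketch of that idea, not the paper’s Isabelle/HOL embedding, and the three-world example is invented for illustration.

```python
# Toy preference semantics for conditional obligation O(B|A): "B is obligatory
# given A" holds iff every best A-world is a B-world, where "best" means optimal
# under a betterness relation. Illustrative only; Aqvist's system E and the
# paper's shallow HOL embedding are richer than this sketch.

def opt(worlds, cond, better_eq):
    """Worlds satisfying cond that are at least as good as every cond-world."""
    a_worlds = [w for w in worlds if cond(w)]
    return [w for w in a_worlds if all(better_eq(w, v) for v in a_worlds)]

def obligatory(worlds, antecedent, consequent, better_eq):
    """O(consequent | antecedent): all best antecedent-worlds satisfy consequent."""
    return all(consequent(w) for w in opt(worlds, antecedent, better_eq))

# Example: three worlds ranked w0 > w1 > w2; "fence" holds at w1 and w2,
# "white" (the fence is painted white) only at w1.
worlds = [0, 1, 2]
rank = {0: 0, 1: 1, 2: 2}                        # lower rank = better
better_eq = lambda w, v: rank[w] <= rank[v]
fence = lambda w: w in (1, 2)
white = lambda w: w == 1

print(obligatory(worlds, fence, white, better_eq))           # True: best fence-world is white
print(obligatory(worlds, lambda w: True, fence, better_eq))  # False: best world w0 has no fence
```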
  22.
    In his The Road to Reality as well as in his Fashion, Faith and Fantasy, Roger Penrose criticises string theory and its practitioners from a variety of angles ranging from conceptual, technical, and methodological objections to sociological observations about the string theoretic scientific community. In this article, we assess Penrose’s conceptual/technical objections to string theory, focussing in particular upon those which invoke the notion of ‘functional freedom’. In general, we do not find these arguments to be successful.
    Found 1 week, 1 day ago on PhilSci Archive
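    For orientation, the notion these objections turn on: Penrose measures the size of a classical field theory’s solution space with his “functional freedom” notation. The gloss below is a standard presentation of that idea, not an excerpt from the article.

```latex
% A field determined by k freely choosable functions of d variables has
% functional freedom
\infty^{\,k\infty^{d}}.
% The freedom is incomparably larger for larger d: for any k, k',
\infty^{\,k'\infty^{d'}} \gg \infty^{\,k\infty^{d}} \quad \text{whenever } d' > d.
% Penrose's worry: strings living in 9+1 dimensions carry far more functional
% freedom than the 3+1-dimensional physics they are supposed to recover.
```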
  23.
    When is it explanatorily better to adopt a conjunction of explanatory hypotheses as opposed to committing to only some of them? Although conjunctive explanations are inevitably less probable than less committed alternatives, we argue that the answer is not ‘never’. This paper provides an account of the conditions under which explanatory considerations warrant a preference for less probable, conjunctive explanations. After setting out four formal conditions that must be met by such an account, we consider the shortcomings of several approaches. We develop an account that avoids these shortcomings and then defend it by applying it to a well-known example of explanatory reasoning in contemporary science.
    Found 1 week, 1 day ago on PhilSci Archive
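    The elementary fact that generates the puzzle is worth displaying: conjoining hypotheses can only lose probability, so any preference for a conjunctive explanation must be earned on explanatory rather than probabilistic grounds.

```latex
% For any hypotheses H_1, H_2 and evidence E:
P(H_1 \wedge H_2 \mid E) \;\le\; \min\{\, P(H_1 \mid E),\; P(H_2 \mid E) \,\}.
```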
  24.
    In this work we argue against the interpretation that underlies the “Standard” account of Quantum Mechanics (SQM) that was established during the 1930s by Niels Bohr and Paul Dirac. Ever since, following this orthodox narrative, physicists have dogmatically proclaimed (quite regardless of the deep contradictions and problems) that the theory of quanta describes a microscopic realm composed of elementary particles (such as electrons, protons and neutrons) which underlies our macroscopic world composed of tables, chairs and dogs. After critically addressing this atomist dogma, still present today in contemporary (quantum) physics and philosophy, we present a new understanding of quantum individuals, defined as the minimum set of relations within a specific degree of complexity capable of accounting for all relations within that same degree. In this case, quantum individuality is not conceived in absolute terms but instead as an objectively relative concept which, even though it depends on the choice of bases and factorizations, remains nonetheless part of the same invariant representation.
    Found 1 week, 1 day ago on PhilSci Archive
  25.
    Standard textbooks on quantum mechanics present the theory in terms of Hilbert spaces over the field of complex numbers and complex linear operator algebras acting on these spaces. What would be lost (or gained) if a different scalar field, e.g. the real numbers or the quaternions, were used? This issue arose with the birthing of the new quantum theory, and over the decades it has been raised over and over again, drawing a variety of different opinions. Here I attempt to identify and to clarify some of the key points of contention, focusing especially on procedures for complexifying real Hilbert spaces and real algebras of observables.
    Found 1 week, 1 day ago on PhilSci Archive
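    One of the “procedures for complexifying real Hilbert spaces” at issue is the textbook construction, stated here for orientation (standard material, not the paper’s specific proposal):

```latex
% Complexification of a real Hilbert space H: set H_C = H \oplus H, write the
% pair (x, y) as x + iy, and define complex scalar multiplication by
(a + ib)\,(x, y) \;=\; (a x - b y,\; b x + a y),
% and the complex inner product (linear in the second slot) by
\langle (x,y),\,(u,v) \rangle_{\mathbb{C}}
  \;=\; \langle x,u \rangle + \langle y,v \rangle
  \;+\; i\,\big( \langle x,v \rangle - \langle y,u \rangle \big).
% A real-linear operator A extends complex-linearly via A(x + iy) = Ax + iAy;
% the contested question is which such extensions are physically privileged.
```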
  26.
    Teleparallel gravity shares many qualitative features with general relativity, but differs from it in the following way: whereas in general relativity, gravitation is a manifestation of spacetime curvature, in teleparallel gravity, spacetime is (always) flat. Gravitational effects in this theory arise due to spacetime torsion. It is often claimed that teleparallel gravity is an equivalent reformulation of general relativity. In this paper we question that view. We argue that the theories are not equivalent, by the criterion of categorical equivalence and any stronger criterion, and that teleparallel gravity posits strictly more structure than general relativity.
    Found 1 week, 1 day ago on PhilSci Archive
  27.
    What should morally conscientious agents do if they must choose among options that are somewhat right and somewhat wrong? Should you select an option that is right to the highest degree, or would it perhaps be more rational to choose randomly among all somewhat right options? And how should lawmakers and courts address behavior that is neither entirely right nor entirely wrong? In this first book-length discussion of the “gray area” in ethics, Martin Peterson challenges the assumption that rightness and wrongness are binary properties. He argues that some acts are neither entirely right nor entirely wrong, but rather a bit of both. Including discussions of white lies and the permissibility of abortion, Peterson’s book presents a gradualist theory of right and wrong designed to answer pressing practical questions about the gray area in ethics.
    Found 1 week, 2 days ago on Martin Peterson's site
  28.
    Suppose that an informant (test, expert, device, perceptual system, etc.) is unlikely to err when pronouncing on a particular subject matter. When this is so, it might be tempting to defer to that informant when forming beliefs about that subject matter. How is such an inferential process expected to fare in terms of truth (leading to true beliefs) and evidential fit (leading to beliefs that fit one’s total evidence)? Using a medical diagnostic test as an example, we set out a formal framework to investigate this question. We establish seven results and make one conjecture. The first four results show that when the test’s error probabilities are low, the process of deferring to the test can score well in terms of (i) both truth and evidential fit, (ii) truth but not evidential fit, (iii) evidential fit but not truth, or (iv) neither truth nor evidential fit. Anything is possible. The remaining results and conjecture generalize these results in certain ways. These results are interesting in themselves—especially given that the diagnostic test is not sensitive to the target disease’s base rate—but also have broader implications for the more general process of deferring to an informant. Additionally, our framework and diagnostic example can be used to create test cases for various reliabilist theories of inferential justification. We show, for example, that they can be used to motivate evidentialist process reliabilism over process reliabilism.
    Found 1 week, 2 days ago on Michael Roche's site
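    The truth-versus-evidential-fit tension is easy to exhibit numerically. The sketch below works through Bayes’ theorem for a test with low error probabilities but a low base rate; the specific numbers are illustrative choices, not the paper’s.

```python
# Deferring to a reliable test vs. fitting the total evidence: a worked example.
# Sensitivity and specificity are high, but the base rate is low, so a positive
# result leaves the posterior probability of disease well below 0.5.

sens = 0.95    # P(positive | disease)
spec = 0.95    # P(negative | no disease)
base = 0.01    # P(disease) -- the test itself is insensitive to this base rate

# Posterior by Bayes' theorem given a positive result:
p_pos = sens * base + (1 - spec) * (1 - base)
posterior = sens * base / p_pos
print(f"P(disease | positive) = {posterior:.3f}")   # ~0.161

# The test errs rarely: on a random subject it pronounces correctly with prob.
accuracy = sens * base + spec * (1 - base)
print(f"P(test is correct)    = {accuracy:.3f}")    # ~0.950

# Yet believing "disease" upon a positive verdict does NOT fit the total
# evidence (the posterior is ~0.16), and among positive verdicts such beliefs
# are mostly false -- one way the truth / evidential-fit combinations listed
# in the abstract can come apart.
```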
  29.
    Ordinal, interval, and ratio scales are discussed and arguments for the thesis that “better than” comparisons reside on interval or ratio scales are laid out. It is argued that linguistic arguments are not conclusive since alternative rank-based definitions can be given, and that in general “better than” comparisons do not have a common scale type. Some comparison dimensions reside on ratio scales, whereas others do not show any evidence of lying on a scale stronger than an ordinal scale.
    Found 1 week, 3 days ago on Erich Rast's site
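    For reference, the scale hierarchy in play is Stevens’s, on which a scale type is individuated by its class of admissible (meaning-preserving) transformations; this is standard measurement theory, not a claim from the paper.

```latex
% Admissible transformations by scale type:
\text{ordinal:}  \quad x \mapsto f(x), \;\; f \text{ strictly increasing};
\text{interval:} \quad x \mapsto a x + b, \;\; a > 0;
\text{ratio:}    \quad x \mapsto a x, \;\; a > 0.
% Only order is meaningful on an ordinal scale; differences become meaningful
% on an interval scale; claims like "twice as good" require a ratio scale.
```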
  30.
    I have been thinking about what can really be done with Integrated Information Theory (IIT), in Aaronson’s simplified formulation, but I noticed a rather interesting difficulty. In my previous post on the subject, I noticed that a double grid system, where there are two grids stacked on top of one another, with the bottom grid consisting of inputs and the upper grid of outputs, and each upper value being the logical OR of the (up to) five neighboring input values, will be conscious according to IIT if all the values are zero and the grid is large enough. …
    Found 1 week, 3 days ago on Alexander Pruss's Blog
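    The double-grid construction in the post is concrete enough to write down. The sketch below builds the output grid in Python; reading “the (up to) five neighboring input values” as the input cell directly below plus its four orthogonal neighbors is my assumption about the intended neighborhood.

```python
# Minimal sketch of the double-grid system from the post: a bottom grid of
# binary inputs and a top grid whose cells are each the OR of the (up to) five
# neighboring inputs -- assumed here to be the input cell directly below plus
# its 4 orthogonal neighbors.

def upper_grid(inputs):
    n = len(inputs)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            neighbors = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            out[i][j] = int(any(
                inputs[a][b]
                for a, b in neighbors
                if 0 <= a < n and 0 <= b < n   # edge cells see fewer than five
            ))
    return out

# The all-zero state the post discusses: every output is 0 as well, yet IIT
# (in Aaronson's simplified formulation, per the post) is said to assign a
# large grid high integrated information in this state.
inputs = [[0] * 8 for _ in range(8)]
print(all(v == 0 for row in upper_grid(inputs) for v in row))  # True
```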