1. 751155.147167
    Despite nearly a century of development, quantum theories that address the special relativity-quantum mechanics tension still struggle with limited explanatory depth. Given the fundamental differences between the two core theories, this is not surprising. Quantum theories that rely on mathematical constructs to explain particle or quantum state dynamics often struggle to reconcile with special relativity’s constraints in a physical 4D spacetime.
    Found 1 week, 1 day ago on PhilSci Archive
  2. 751183.147298
    Researchers in archaeology explore the use of generative AI (GenAI) systems for reconstructing destroyed artifacts. This paper poses a novel question: can such GenAI systems generate evidence that provides new knowledge about the world or can they only produce hypotheses that we might seek evidence for? Exploring responses to this question, the paper argues that 1) GenAI outputs can at least be understood as higher-order evidence (Parker 2022) and 2) may also produce de novo synthetic evidence.
    Found 1 week, 1 day ago on PhilSci Archive
  3. 751210.147314
    There is a complex interplay between the models used in dark matter detection experiments, which has led to difficulty in interpreting the results of the experiments and ascertaining whether we have detected the particle or not. The aim of this paper is to categorise and explore the different models used in said experiments, emphasizing the distinctions and dependencies among the different types of models used in this field. Against a background theory, models are categorised into four distinct types: theoretical, phenomenological, experimental, and data models. This taxonomy highlights how each model serves a unique purpose and operates under varying degrees of independence from its respective framework. A key focus is on the experimental model, which is shown to rely on constraints from both data models and phenomenological ones. The article argues that while theoretical models provide a backdrop for understanding the nature of dark matter, the experimental models must stand independently, particularly in their methodological approaches. This is done via a discussion of the inherent challenges in dark matter detection, such as inconsistent results and difficulties in cross-comparison, stemming from the diverse modelling approaches.
    Found 1 week, 1 day ago on PhilSci Archive
  4. 768367.147326
    In Part 2, I explained some stuff you can do with graphs whose edges are labeled by elements of a rig. Remember, a rig is like a ring, but it might not have negatives. A great example is the boolean rig, whose elements are truth values: The addition in this rig is ‘or’ and the multiplication is ‘and’. …
    Found 1 week, 1 day ago on Azimuth
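    The boolean rig described above has a standard computational use: matrix multiplication over it computes graph reachability. A minimal sketch (the example graph and function names are illustrative, not from the post):

```python
# Sketch: multiplying adjacency matrices over the boolean rig
# ({False, True}, 'or' as addition, 'and' as multiplication).
# The (i, j) entry of A*A says whether there is a 2-step path from i to j.

def bool_matmul(a, b):
    """Matrix product where sum -> any (or) and product -> and."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical example: a 3-node path graph 0 -> 1 -> 2.
adj = [[False, True, False],
       [False, False, True],
       [False, False, False]]

two_step = bool_matmul(adj, adj)
print(two_step[0][2])  # True: node 2 is reachable from node 0 in two steps
```

    Repeating the product (or taking the "transitive closure") answers reachability for paths of any length, which is the kind of edge-labeled-graph computation the post discusses.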
  5. 808908.147342
    In this paper, starting from Wittgenstein's "Tractatus Logico-Philosophicus," I analyze the universal structure of objects in contrast to the universal structure of descriptions as analyzed there. This is an attempt to systematize the distinction between descriptions and objects. Furthermore, using this universal structural system of objects as a stepping stone, I prove that the law of excluded middle is erroneous.
    Found 1 week, 2 days ago on PhilSci Archive
  6. 815774.147355
    In the previous post, I showed that Goodman and Quine’s counting method fails for objects that have too much overlap. I think (though the technical parts here are more difficult) that the same is true for their definition of the ancestral or transitive closure of a relation. …
    Found 1 week, 2 days ago on Alexander Pruss's Blog
  7. 833133.147366
    Trump won. Within hours, the pundits had come out. They proposed diagnoses of why he won: institutional failures, cultural backlash, big money, political unoriginality, or luck. They pointed to mistakes: Biden shouldn’t have run again, Harris should’ve gone on Joe Rogan, the Democrats should’ve proposed a clearer vision, and so on. …
    Found 1 week, 2 days ago on Stranger Apologies
  8. 839779.147376
    In my eyes, every election is a trainwreck. Two proudly irrational tribes rally behind two self-congratulatory demagogic mediocrities as if they were the Second Coming. Listening to any “serious” candidate speak is torture. …
    Found 1 week, 2 days ago on Bet On It
  9. 843007.147385
    [The final entry in my series of excerpts from Questioning Beneficence: Four Philosophers on Effective Altruism and Doing Good. This is my response to one of Ryan Davis’s essays, which asks whether effective altruism is compatible with living a meaningful life.] …
    Found 1 week, 2 days ago on Good Thoughts
  10. 981845.147394
    Sociality of science has long been the topic of investigation in science studies and the social constructivist approaches advanced within critical race theory and feminist epistemology. What might be referred to as the sociality turn has also been present in much recent philosophy of science, which has turned attention to the practice of science and the activities of scientists as a target of philosophical study. But what is sociality and to what does it refer?
    Found 1 week, 4 days ago on PhilSci Archive
  11. 981874.147405
    A central challenge for neuroscience has been understanding how nervous systems flexibly and reliably generate complex behaviors. How does an animal distinguish a benign encounter from a threat? How is irrelevant information ignored to satisfy its needs? Since the days of Pavlov’s salivating dogs or Skinner’s bar-pressing rats, behavioral neuroscientists have constructed highly constrained lab paradigms to study how experience modifies relatively simple behaviors. These behaviors give scientists the benefit of precision and control: by manipulating the temporal relations between stimulus and response, neural activity can be directly tied to the behavior. However, these behaviors are also seen as highly contrived in the sense that there are no levers or bells in the habitats in which rats’ and dogs’ brains evolved, which presumably shaped the neural circuits that generate most behaviors.
    Found 1 week, 4 days ago on PhilSci Archive
  12. 981902.147414
    The program of reconstructing quantum theory based on information-theoretic principles enjoys much popularity in the foundations of physics. Surprisingly, this endeavor has only received very little attention in philosophy. Here I argue that this should change. This is because, on the one hand, reconstructions can help us to better understand quantum mechanics, and, on the other hand, reconstructions are themselves in need of interpretation. My overall objective, thus, is to motivate the reconstruction program and to show why philosophers should care. My specific aims are threefold. (i) Clarify the relationship between reconstructing and interpreting quantum mechanics, (ii) show how the informational reconstruction of quantum theory puts pressure on standard realist interpretations, (iii) defend the quantum reconstruction program against possible objections.
    Found 1 week, 4 days ago on PhilSci Archive
  13. 981929.147423
    In quantum foundations, there is growing interest in the program of reconstructing the quantum formalism from clear physical principles. These reconstructions are formulated in an operational framework, deriving the formalism from information-theoretic principles. It has been recognized that this project is in tension with standard ψ-ontic interpretations. This paper presupposes that the quantum reconstruction program (QRP) (i) is a worthwhile project and (ii) puts pressure on ψ-ontic interpretations. Where does this leave us? Prima facie, it seems that ψ-epistemic interpretations perfectly fit the spirit of information-based reconstructions. However, ψ-epistemic interpretations, understood as saying that the wave function represents one’s knowledge about a physical system, have recently been challenged on technical and conceptual grounds. More importantly, for some researchers working on reconstructions, the lesson of successful reconstructions is that the wave function does not represent objective facts about the world. Since knowledge is a factive concept, this speaks against epistemic interpretations. In this paper, I discuss whether ψ-doxastic interpretations constitute a reasonable alternative. My thesis is that if we want to engage QRP with ψ-doxastic interpretations, then we should aim at a reconstruction that is spelled out in non-factive experiential terms.
    Found 1 week, 4 days ago on PhilSci Archive
  14. 981955.147434
    QBism is currently one of the most widely discussed “subjective” interpretations of quantum mechanics. Its key move is to say that quantum probabilities are personalist Bayesian probabilities and that the quantum state represents subjective degrees of belief. Even probability-one predictions are considered subjective assignments expressing the agent’s highest possible degree of certainty about what they will experience next. For most philosophers and physicists this means that QBism is simply too subjective. Even those who agree with QBism that the wave function should not be reified and that we should look for alternatives to standard ψ-ontic interpretations often argue that QBism must be abandoned because it detaches science from objectivity. The problem is that from the QBist perspective it is hard to see how objectivity could enter science. In this paper, I introduce and motivate an interpretation of quantum mechanics that takes QBism as a starting point, is consistent with all its virtues, but allows objectivity to enter from the get-go. This is the view that quantum probabilities should be understood as objective degrees of epistemic justification.
    Found 1 week, 4 days ago on PhilSci Archive
  15. 981983.147444
    The success of AlphaFold, an AI system that predicts protein structures, poses a challenge for traditional understanding of scientific knowledge. It operates opaquely, generating predictions without revealing the underlying principles behind its predictive success. Moreover, the predictions are largely not empirically tested but are taken at face value for further modelling purposes (e.g. in drug discovery), where experimentation takes place much further down the line. The paper presents a trilemma regarding the epistemology of AlphaFold, whereby we are forced to reject one of three claims: (1) AlphaFold produces scientific knowledge; (2) predictions alone are not scientific knowledge unless derivable from established scientific principles; and (3) scientific knowledge cannot be strongly opaque. The paper argues that AlphaFold's predictions function as scientific knowledge due to their trustworthiness and functional integration into scientific practice. The paper addresses the key challenge of strong opacity by drawing on Alexander Bird's functionalist account of scientific knowledge as irreducibly social, and advances the position against individual knowledge being necessary for the production of scientific knowledge. It argues that the implicit principles used by AlphaFold satisfy the conditions for scientific knowledge, despite their opacity. Scientific knowledge can be strongly opaque to humans, as long as it is properly functionally integrated into the collective scientific enterprise.
    Found 1 week, 4 days ago on PhilSci Archive
  16. 1019305.147453
    Thomas Nagel’s (1998) ‘Concealment and Exposure’ is one of my favorite philosophy papers. His account of the value of reticence is a must-read for anyone who might otherwise find “radical honesty” tempting: The first and most obvious thing to note about many of the most important forms of reticence is that they are not dishonest, because the conventions that govern them are generally known. …
    Found 1 week, 4 days ago on Good Thoughts
  17. 1092957.147462
    In yesterday’s post, I noted that Goodman and Quine’s nominalist mereological definition of what it is to say that there are more cats than dogs fails if there are cats that are conjoint twins. This raises the question whether there is some other way of using the same ontological resources to generate a definition of “more” that works for overlapping objects as well. …
    Found 1 week, 5 days ago on Alexander Pruss's Blog
  18. 1097337.147472
    In a recent critique of Kuorikoski, Lehtinen and Marchionni’s (2010) analysis of derivational robustness, Margherita Harris (2021) argued that the proposed independence condition is not credible. While this criticism is cogent, it does not challenge the incremental epistemic benefits from robustness, as they do not hinge on satisfying independence conditions. Distinguishing between incremental increases and a high absolute degree of confidence in a result is crucial: the latter requires demonstrating the independence of every false assumption.
    Found 1 week, 5 days ago on PhilSci Archive
  19. 1108737.147481
    Economists distinguish different kinds of goods. First-year students generally learn in their microeconomic (or economic principles) class the distinction between normal, inferior (or Giffen), and superior (or Veblen) goods. …
    Found 1 week, 5 days ago on The Archimedean Point
  20. 1126454.147512
    Absence is a peculiar, yet important notion in ontology as well as semantics. Absence contrasts with presence, and as such the notion has been discussed in the context of truthmaker semantics, as the absence of a truthmaker of a sentence S and thus the truthmaker of the negation of S. Absence then is, roughly, the negation of presence. But absence contrasts not only with presence. There is a stronger notion of absence on which the absence of a thing presupposes that that thing should have been there, to make something else complete. Absence in that sense is a modal notion that crucially involves the notion of completion. This notion is the one reflected linguistically in the semantics of what I will call ‘completion-related predicates of absence’. In English, these are lack and be missing, as below: (1) a. The house lacks a door. b. A screw is missing (from the chair).
    Found 1 week, 6 days ago on Friederike Moltmann's site
  21. 1155012.14753
    Generative artificial intelligence (AI) applications based on large language models have not enjoyed much success in symbolic processing and reasoning tasks, making them of little use in mathematical research. Recently, however, DeepMind’s AlphaProof and AlphaGeometry 2 applications have been reported to perform well in mathematical problem solving. These applications are hybrid systems combining large language models with rule-based systems, an approach sometimes called neuro-symbolic AI. In this paper, I present a scenario in which such systems are used in research mathematics, more precisely in theorem proving. In the most extreme case, such a system could be an autonomous automated theorem prover (AATP), with the potential of proving new humanly interesting theorems and even presenting them in research papers. The use of such AI applications would be transformative to mathematical practice and would demand clear ethical guidelines. In addition to that scenario, I identify other, less radical, uses of generative AI in mathematical research. I analyse how guidelines set for ethical AI use in scientific research can be applied in the case of mathematics, arguing that while there are many similarities, there is also a need for mathematics-specific guidelines.
    Found 1 week, 6 days ago on PhilSci Archive
  22. 1155115.147552
    In recent times, the exponential growth of sequenced genomes and structural knowledge of proteins, as well as the development of computational tools and controlled vocabularies to deal with this growth, has fueled a demand for conceptual clarification regarding the concept of function in molecular biology. In this article, we will attempt to develop an account of function fit to deal with the conceptual/philosophical problems in that domain, but which can be extended to other areas of biology. To provide this account, we will argue for three theses: (1) some authors have confused metatheoretical issues (about the meaning and application criteria of terms) with metaphysical ones (about teleology); this led them to (2) look for explicit definitions of “function”, in terms of necessary and sufficient criteria of application, in order to make the concept of function eliminable; however, (3) if one leaves metaphysical worries aside and focuses on functional attribution practices, it is more adequate to say that the concept of function has an open texture. That is, a multiplicity of application criteria is available, none of which is either necessary or sufficient to attribute a function to a trait, and which only in concert form a clear picture. We distinguish this thesis from some usual forms of pluralism. Finally, we will illustrate this account with a historical reconstruction of the ascription of a water transport function to aquaporins.
    Found 1 week, 6 days ago on PhilSci Archive
  23. 1170737.147568
    Goodman and Quine have a clever way of saying that there are more cats than dogs without invoking sets, numbers or other abstracta. The trick is to say that x is a bit of y if x is a part of y and x is the same size as the smallest of the dogs and cats. …
    Found 1 week, 6 days ago on Alexander Pruss's Blog
  24. 1212565.14759
    Did you know students at Oxford in 1335 were solving problems about objects moving with constant acceleration? This blew my mind. Medieval scientists were deeply confused about the connection between force and velocity: it took Newton to realize force is proportional to acceleration. …
    Found 2 weeks ago on Azimuth
  25. 1212566.147604
    In this post, I consider the questions posed for my (October 9) Neyman Seminar by Philip Stark, Distinguished Professor of Statistics at UC Berkeley. We didn’t directly deal with them during the discussion, and I find some of them a bit surprising. …
    Found 2 weeks ago on D. G. Mayo's blog
  26. 1212724.14762
    The universal conception of necessity says that necessary truth is truth in all possible worlds. This idea is well studied in the context of classical possible worlds models, and there its logic is S5. The universal conception of necessity is less well studied in models for non-classical logics. We will present some preliminary results on universal necessity on models for intuitionistic logic, first-degree entailment, and relevant logics. We will close by discussing a way in which universal necessity is a very classical concept.
    Found 2 weeks ago on Shawn Standefer's site
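    The universal clause described in the abstract can be written out explicitly; in a Kripke model with world set W (notation assumed here, not taken from the abstract):

```latex
% Universal necessity: \Box A holds at a world iff A holds at every world,
% with no accessibility relation restricting the quantifier.
\mathcal{M}, w \models \Box A
  \quad\Longleftrightarrow\quad
  \forall w' \in W :\ \mathcal{M}, w' \models A
```

    Over classical possible worlds models this unrestricted quantifier yields S5, as the abstract notes; the question is what it yields over intuitionistic, first-degree entailment, and relevant models.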
  27. 1212752.147633
    A challenge for relevant logicians is to delimit their area of study. I propose and explore the definition of a relevant logic as a logic satisfying a variable-sharing property and closed under detachment and adjunction. This definition is, I argue, a good definition that captures many familiar logics and raises interesting new questions concerning relevant logics. As is familiar to readers of Entailment or Relevant Logics and Their Rivals, the motivations for relevant logics have a strong intuitive pull. The philosophical picture put forward by Anderson and Belnap (1975), for example, is compelling and has led to many fruitful developments. With some practice, one can develop a feel for what sorts of axioms or rules lead to violations of relevance in standard relevant logics. These sorts of intuitions only go so far, as some principles that lead to violations of relevance in stronger logics are compatible with it in weaker logics. There is a large number of relevant logics, but there is not much discussion of precise characterizations of the class of relevant logics.
    Found 2 weeks ago on Shawn Standefer's site
  28. 1270428.147646
    We carry out a quantitative Bayesian analysis of the evolution of credences in low energy supersymmetry (SUSY) in light of the most relevant empirical data. The analysis is based on the assumption that observers apply principles of optimism or pessimism about theory building in a coherent way. On this basis, we provide a rough assessment of the current range of plausible credences in low energy SUSY and determine in which way LHC data changes those credences. For observers who had been optimistic about low energy SUSY before the LHC, the method reports that LHC data does lead to decreased credences in accordance with intuition. The decrease is moderate, however, and keeps posteriors at very substantial levels. The analysis further establishes that a very high but not yet indefensible degree of pessimism regarding the success chances of theory building still results in quite significant credences in GUT and low energy SUSY for the time right before the start of the LHC. The pessimist’s credence in low energy SUSY remains nearly unchanged once LHC data is taken into account.
    Found 2 weeks ago on PhilSci Archive
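    The single-update core of such a quantitative Bayesian analysis can be sketched in a few lines. The numbers below are hypothetical placeholders, not the paper's values; they only illustrate the qualitative finding that a null LHC result lowers an optimist's credence moderately while keeping the posterior substantial:

```python
# Minimal Bayes-update sketch with made-up numbers (not the paper's values):
# how a null LHC result lowers an optimist's credence in low energy SUSY.

def update(prior, p_data_given_h, p_data_given_not_h):
    """Posterior P(H | data) by Bayes' theorem."""
    num = p_data_given_h * prior
    return num / (num + p_data_given_not_h * (1 - prior))

prior = 0.6           # optimistic pre-LHC credence (hypothetical)
p_null_if_susy = 0.4  # a null result is still possible if SUSY is just out of reach
p_null_if_not = 0.95  # a null result is near-certain if there is no low energy SUSY

posterior = update(prior, p_null_if_susy, p_null_if_not)
print(round(posterior, 3))
```

    The decrease is driven by the likelihood ratio p_null_if_susy / p_null_if_not; because a null result is not strongly diagnostic against SUSY under these assumptions, the posterior remains well above zero.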
  29. 1270456.147659
    Recent arguments for the physicality of pure quantum states revive ontic interpretations of the wave function. The resulting proposals describe radically different worlds and make divergent predictions, but not experimentally accessible ones as the technology stands; in effective terms, the interpretations are empirically equivalent, undermining the prospects of realist interpretation. One response, Partial Realism (PR), limits commitment to theoretical convergences at intermediate descriptive levels: it looks for shared theoretical claims among the competing programs about, e.g., the psi state, micro-spatial structures, and the Bohr model. However, critics (notably Callender 2020) object that the common contents identified are meager. The objections include that the quantum state is the same only approximately; that the shared micro-spatial structures hailed by realists are not quantum results and thus cannot help PR; that the same goes for theoretical parts such as the orbits derived from Bohr’s model, which rest on semiclassical theories; also raised are qualms about Bohmian realist accounts of reflection/transmission coefficients in the tunneling effect. Results such as these lead Callender to dismiss the PR strategy. This paper challenges his arguments and defends the strategy.
    Found 2 weeks ago on PhilSci Archive
  30. 1270486.147674
    What do we mean when we diagnose a patient with a disease? What does it mean to say that two people have the same disease? In this paper, I argue that diseases are natural kinds, using a conception of kinds derived from John Stuart Mill and Ruth Millikan. I demonstrate that each disease is a natural kind and that the shared properties occur as a result of the pathogenesis of the disease. I illustrate this with diverse examples from internal medicine and compare my account to alternative ontologies.
    Found 2 weeks ago on PhilSci Archive