1. 1384005.3163
    Did you know students at Oxford in 1335 were solving problems about objects moving with constant acceleration? This blew my mind. Medieval scientists were deeply confused about the connection between force and velocity: it took Newton to realize force is proportional to acceleration. …
    Found 2 weeks, 2 days ago on Azimuth
  2. 1384006.316447
    In this post, I consider the questions posed for my (October 9) Neyman Seminar by Philip Stark, Distinguished Professor of Statistics at UC Berkeley. We didn’t directly deal with them during the discussion, and I find some of them a bit surprising. …
    Found 2 weeks, 2 days ago on D. G. Mayo's blog
  3. 1384164.316472
    The universal conception of necessity says that necessary truth is truth in all possible worlds. This idea is well studied in the context of classical possible worlds models, and there its logic is S5. The universal conception of necessity is less well studied in models for non-classical logics. We will present some preliminary results on universal necessity on models for intuitionistic logic, first-degree entailment, and relevant logics. We will close by discussing a way in which universal necessity is a very classical concept.
    Found 2 weeks, 2 days ago on Shawn Standefer's site
  4. 1384192.316488
    A challenge for relevant logicians is to delimit their area of study. I propose and explore the definition of a relevant logic as a logic satisfying a variable-sharing property and closed under detachment and adjunction. This definition is, I argue, a good definition that captures many familiar logics and raises interesting new questions concerning relevant logics. As is familiar to readers of Entailment or Relevant Logics and Their Rivals, the motivations for relevant logics have a strong intuitive pull. The philosophical picture put forward by Anderson and Belnap (1975), for example, is compelling and has led to many fruitful developments. With some practice, one can develop a feel for what sorts of axioms or rules lead to violations of relevance in standard relevant logics. These sorts of intuitions only go so far, as some principles that lead to violations of relevance in stronger logics are compatible with it in weaker logics. There is a large number of relevant logics, but there is not much discussion of precise characterizations of the class of relevant logics.
    Found 2 weeks, 2 days ago on Shawn Standefer's site
  5. 1441868.316504
    We carry out a quantitative Bayesian analysis of the evolution of credences in low energy supersymmetry (SUSY) in light of the most relevant empirical data. The analysis is based on the assumption that observers apply principles of optimism or pessimism about theory building in a coherent way. On this basis, we provide a rough assessment of the current range of plausible credences in low energy SUSY and determine in which way LHC data changes those credences. For observers who had been optimistic about low energy SUSY before the LHC, the method reports that LHC data does lead to decreased credences in accordance with intuition. The decrease is moderate, however, and keeps posteriors at very substantial levels. The analysis further establishes that a very high but not yet indefensible degree of pessimism regarding the success chances of theory building still results in quite significant credences in GUT and low energy SUSY for the time right before the start of the LHC. The pessimist’s credence in low energy SUSY remains nearly unchanged once LHC data is taken into account.
    Found 2 weeks, 2 days ago on PhilSci Archive
  6. 1441896.316521
    Recent arguments for the physicality of pure quantum states revive ontic interpretations of the wave function. The resulting proposals describe radically different worlds and make divergent predictions, but not experimentally accessible ones as the technology stands; in effective terms, the interpretations are empirically equivalent, ruining the prospects of realist interpretation. One response, Partial Realism (PR), limits commitment to theoretical convergences at intermediate theoretical descriptive levels. PR looks for shared theoretical claims among the competing programs; in the quantum mechanical case, these concern, e.g., the psi state, micro-spatial structures, and the Bohr model. However, critics (notably Callender 2020) object that the common contents identified are meager. The objections include that the quantum state is the same only approximately; that the shared micro-spatial structures hailed by realists are not quantum results and thus cannot help PR; that the same goes for theoretical parts such as the orbits derived from Bohr’s model, which rest on semiclassical theories; also raised are qualms about Bohmian realist accounts of reflection/transmission coefficients in the tunneling effect. Results such as these lead Callender to dismiss the PR strategy. This paper challenges his arguments and defends the strategy.
    Found 2 weeks, 2 days ago on PhilSci Archive
  7. 1441926.316542
    What do we mean when we diagnose a patient with a disease? What does it mean to say that two people have the same disease? In this paper, I argue that diseases are natural kinds, using a conception of kinds derived from John Stuart Mill and Ruth Millikan. I demonstrate that each disease is a natural kind and that the shared properties occur as a result of the pathogenesis of the disease. I illustrate this with diverse examples from internal medicine and compare my account to alternative ontologies.
    Found 2 weeks, 2 days ago on PhilSci Archive
  8. 1499629.316568
    Why are quantum correlations so puzzling? A standard answer is that they seem to require either nonlocal influences or conspiratorial coincidences. This suggests that by embracing nonlocal influences we can avoid conspiratorial fine-tuning. But that’s not entirely true. Recent work, leveraging the framework of graphical causal models, shows that even with nonlocal influences, a kind of fine-tuning is needed to recover quantum correlations. This fine-tuning arises because the world has to be just so as to disable the use of nonlocal influences to signal, as required by the no-signaling theorem. This places an extra burden on theories that posit nonlocal influences, such as Bohmian mechanics, of explaining why such influences are inaccessible to causal control. I argue that Everettian Quantum Mechanics suffers no such burden. Not only does it not posit nonlocal influences, it operates outside the causal models framework that was presupposed in raising the fine-tuning worry. Specifically, it represents subsystems with density matrices instead of random variables. This allows it to sidestep all the results (including EPR and Bell) that put quantum correlations in tension with causal models. However, this doesn’t mean one must abandon causal reasoning altogether in a quantum world. When decoherence is rampant and there’s no controlled entanglement, Everettian Quantum Mechanics licenses our continued use of standard causal models. When controlled entanglement is present—such as in Bell-type experiments—we can employ recently-proposed quantum causal models that are consistent with Everettian Quantum Mechanics. We never need invoke any kind of non-local influence or any kind of fine-tuning.
    Found 2 weeks, 3 days ago on PhilSci Archive
  9. 1499661.316584
    It is now widely accepted that science requires non-epistemic values in a variety of ways. One major strand of research in the values in science literature is what has recently been dubbed the ‘new demarcation problem’, which aims to distinguish between the appropriate and inappropriate ways in which non-epistemic values can influence science (Holman & Wilholt, 2022). Public policies – government policies backed by state coercion – are binding on the public only if they are politically legitimate, and science advising is a major part of many public policies, e.g. climate and public health policies. Thus, we need an account of when public policies are politically legitimate and how science and science policy advising contributes to politically legitimate policies. We need to know what values, including non-epistemic values, are appropriate for science and science policy advising such that they contribute to political legitimacy. To address these issues, philosophers of science must engage with political philosophy (Schroeder, 2022d). The leading approaches thus far appeal to the values of liberal democracy, public participation, and/or a deliberative democracy account of political legitimacy.
    Found 2 weeks, 3 days ago on PhilSci Archive
  10. 1499687.316598
    Beyond the obvious technical difficulties, human attempts to communicate with hypothetical Extra-Terrestrial Intelligences also present a number of philosophical puzzles. After all, an alien intelligence is likely the closest thing to a Wittgensteinian lion humanity could ever encounter. In this paper I advance a new challenge for the feasibility of communication with extra-terrestrials. The problem I raise is a practical problem that falls out of the history and philosophy of mathematics and the implementation of METI projects – specifically, the semiprime self-decryption schema of the Drake Pictures message strategy. The Drake Pictures strategy presumes that aliens share the concept ‘prime number’ with us, as understanding that concept is necessary to decrypt our message.
    Found 2 weeks, 3 days ago on PhilSci Archive
  11. 1499720.316609
    The idea that life is to be understood in terms of information has strongly taken hold in recent decades. I discuss two attempts to carry this through mathematically. G. J. Chaitin, co-founder of algorithmic information theory, proposes an information-theoretic definition of life in terms of organized complexity (Chaitin 1990a and 1990b). More recently, William Dembski, Winston Ewart, and Robert Marks have attempted to formulate in information-theoretic terms Dembski’s concept of specified complexity, using a mathematically hybrid entity they term “algorithmic specified complexity” (Ewart, Dembski, and Marks 2013a, 2014, 2015a, 2015b), and Dembski and Ewart have reformulated this concept in their newly revised edition (Dembski and Ewart 2023) of Dembski’s The Design Inference (Dembski 1998). The aim in both cases is to mathematically distinguish informational properties of biological complexity, in contrast to simple order, on the one hand, and mere randomness on the other. Moreover, the respective mathematical strategies are the same: To take an informational measure and subtract out its randomness, leaving a remainder of organization (Chaitin) or specified complexity (Dembski et al.).
    Found 2 weeks, 3 days ago on PhilSci Archive
  12. 1504106.316631
    Suppose I eat a chocolate bar and this causes me to have a trope of pleasure. Given essentiality of origins, if I had eaten a numerically different chocolate bar that caused the same pleasure, I would have had a numerically different trope of pleasure. …
    Found 2 weeks, 3 days ago on Alexander Pruss's Blog
  13. 1522415.316649
    This entry explores six general approaches to the reconciliation of reason and religious commitment and the relations among them. Though it touches on the intellectual status of theism and the rational credentials of Christianity, these are not its central themes. Instead, it addresses the supernatural or transcendent reference that makes religious commitment at once philosophically interesting and philosophically problematic, examining the main ways of seeking to remove the latter feature without imperiling the former.
    Found 2 weeks, 3 days ago on Stanford Encyclopedia of Philosophy
  14. 1538037.316665
    A morality of recognition maintains that moral norms have their authority in virtue of enacting an ideal moral relationship. T.M. Scanlon has argued that this approach can yield an account of the distinctive force of morality and an attractive account of moral motivation. We find this approach to theorizing about morals extremely promising. However, Scanlon also offers a particular version of the moral relationship, based on the ideal of being justifiable to others. In this essay, we share five reasons why we find this particular relation unattractive. By exploring Scanlon’s own analogy with friendship, we offer an alternative moral relationship based on an ideal of living in caring solidarity with others as human. This alternative is more substantively attractive while retaining the overall benefits of the moral recognition approach. We end by pointing to a tension between Scanlon’s appeal to mutual recognition and the limited scope of contractualist morality.
    Found 2 weeks, 3 days ago on Barry Maguire's site
  15. 1542355.316685
    Linsky & Zalta (1994) argued that simplest quantified modal logic (SQML), with its fixed domain, can be given an actualist interpretation if the Barcan formula is interpreted to conditionally assert the existence of contingently nonconcrete objects. But SQML itself doesn’t require the existence of such objects; in interpretations of SQML in which there is only one possible world, there are no contingent objects, nonconcrete or otherwise. I defend an axiom for SQML that will provably (a) force the domain to have the relevant objects and thereby (b) force the existence of more than one possible world, thereby forestalling modal collapse. I show that the new axiom can be justified by describing the theorems that can be proved when it is added to SQML. I further justify the axiom by reviewing the theorems the axiom allows us to prove when we assume object theory (‘OT’), in its latest incarnation, as a background framework. Finally, I consider the conclusions one can draw when we consider the new axiom in connection with actualism, as this view has been (re-)characterized in recent work.
    Found 2 weeks, 3 days ago on Ed Zalta's site
  16. 1570002.316701
    I’m talking about ‘causal loop diagrams’, which are graphs with edges labeled by ‘polarities’. Often the polarities are simply + and − signs, like here: But polarities can be elements of any monoid, and last time I argued that things work even better if they’re elements of a rig, so you can not only multiply them but also add them. …
    Found 2 weeks, 4 days ago on Azimuth
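    As a hedged illustration of the monoid structure the post describes (an invented toy example, not the post's actual software): composing polarities along a path of edges just means multiplying signs.

    ```python
    # Toy causal loop diagram: directed edges labeled by polarities,
    # +1 for "tends to increase", -1 for "tends to decrease".
    edges = {
        ("stress", "drinking"): +1,
        ("drinking", "health"): -1,
        ("health", "stress"): -1,
    }

    def path_polarity(path):
        """Compose polarities along a path by multiplying signs
        (the monoid operation on {+1, -1})."""
        sign = 1
        for a, b in zip(path, path[1:]):
            sign *= edges[(a, b)]
        return sign

    # The loop stress -> drinking -> health -> stress multiplies to
    # (+1) * (-1) * (-1) = +1, i.e. a reinforcing feedback loop:
    print(path_polarity(["stress", "drinking", "health", "stress"]))  # 1
    ```

    In a rig of polarities one could additionally add the signs of parallel paths between the same two nodes, which is the extra structure the post alludes to.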
  17. 1580102.316717
    Theodor W. Adorno (1903–1969) was one of the most important philosophers and cultural and music critics in Germany after World War II. Although less well known among anglophone philosophers than many of his contemporaries, such as Hans-Georg Gadamer, Adorno had even greater influence on scholars and intellectuals in postwar Germany. In the 1960s he was the most prominent challenger to both Sir Karl Popper’s philosophy of science and Martin Heidegger’s philosophy of existence. Jürgen Habermas, Germany’s foremost social philosopher after 1970, was Adorno’s student and assistant. The current rise of national-populist and authoritarian politics has led to renewed interest in his work on the social psychology of the mass subject and anti-Semitism.
    Found 2 weeks, 4 days ago on Stanford Encyclopedia of Philosophy
  18. 1615100.316732
    It has been questioned why Wittgenstein wrote a significant amount on the foundations of probability in the Tractatus. In this paper I answer this by claiming that the primary aim of Wittgenstein’s account was to criticize a Keynesian theory of probability, and provide multiple pieces of evidence to demonstrate this. This then answers why Wittgenstein wrote such a significant amount on probability. He wrote it because it was salient at the time. Whilst Wittgenstein was at Cambridge there was significant discussion of probability by his philosophical interlocutors, particularly Keynes but also Russell, Moore and others. Wittgenstein thought he had the answers to the problems that were being discussed and set them out in the Tractatus.
    Found 2 weeks, 4 days ago on PhilSci Archive
  19. 1615130.316752
    The view that our best current physics deals with effective systems has gained philosophical traction in the last two decades. A similar view about open systems has also been picking up steam in recent years. Yet little has been said about how the concepts of effective and open systems relate to each other despite their apparent kinship—both indeed seem at first sight to presuppose that the system in question is somehow incomplete. In this paper, I distinguish between two concepts of effectiveness and openness in quantum field theory, which provides a remarkably well-developed theoretical framework to make a first stab at the matter, and argue that on both counts, every realistic effective system in this context is also open. I conclude by highlighting how the discussion opens novel avenues for thinking of systems as open across scales.
    Found 2 weeks, 4 days ago on PhilSci Archive
  20. 1618525.316771
    [#6 in my series of excerpts from Questioning Beneficence: Four Philosophers on Effective Altruism and Doing Good. This is my response to one of Jason Brennan’s essays, which asks whether we can exercise beneficence through business.] …
    Found 2 weeks, 4 days ago on Good Thoughts
  21. 1657636.316787
    Important Announcement: I don’t in any way endorse voting for Jill Stein, or any other third-party candidate. But if you are a Green Party supporter who lives in a swing state, then please at least vote for Harris, and use SwapYourVote.org to arrange for two (!) …
    Found 2 weeks, 5 days ago on Scott Aaronson's blog
  22. 1667807.316802
    In Part 1 I explained ‘causal loop diagrams’, which are graphs with edges labeled by polarities. These are a way to express qualitatively, rather than quantitatively, how entities affect one another. For example, here’s how causal loop diagrams let us say that alcoholism ‘tends to increase’ domestic violence: We don’t need to specify any numbers, or even say what we mean by ‘tends to increase’, though that leads to the danger of using the term in a very loose way. …
    Found 2 weeks, 5 days ago on Azimuth
  23. 1672797.316818
    This paper considers a new and deeply challenging face of the problem of time in the context of cosmology, drawing on the work of Thiemann (2006). Thiemann argues for a radical response to the cosmic problem of time that requires us to modify the classical Friedmann equations. By contrast, we offer a conservative proposal for a solution of the problem by bringing together ideas from the contemporary literature regarding reference frames (Bamonti 2023; Bamonti and Gomes 2024), complete observables (Gryb and Thébault 2016, 2023), and the model-based account of time measurement (Tal 2016). On our approach, we must reinterpret our criteria of observability in light of the clock hypothesis and the model-based account of measurement in order to preserve the Friedmann equations as the dynamical equations for the universe.
    Found 2 weeks, 5 days ago on PhilSci Archive
  24. 1672861.316837
    Despite significant advancements in XAI, scholars note a persistent lack of solid conceptual foundations and integration with broader scientific discourse on explanation. In response, emerging XAI research draws on explanatory strategies from various sciences and philosophy of science literature to fill these gaps. This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems, situating recent advancements in AI explainability within a broader philosophical context. According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making. For deep neural networks, this means discerning functionally relevant components—such as neurons, layers, circuits, or activation patterns—and understanding their roles through decomposition, localization, and recomposition. Proof-of-principle case studies from image recognition and language modeling align these theoretical approaches with the latest research from AI labs like OpenAI and Anthropic. This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or “more modest”) explainability techniques might miss, fostering more thoroughly explainable AI. The paper concludes with a discussion on the epistemic relevance of the mechanistic approach positioned in the context of selected philosophical debates on XAI.
    Found 2 weeks, 5 days ago on PhilSci Archive
  25. 1672890.316855
    This paper brings a species-inclusive, biologically grounded lens to the question, are there two and only two sexes? Insofar as the terms associated with sex are used to pick out taxa where reproduction is typically achieved through the fusion of two gametes of different sizes, the answer is yes. Insofar as the terms associated with sex are used to pick out morphs within a species the answer is often no, though the question is an empirical one and must be addressed species by species. Within our own species, where we have species-typical primary and secondary sex characteristics that typically align with gametic differences, there are many naturally occurring developmental differences that do not so align. Gender, though often confused with sex, is something else altogether, being a socio-cultural kind rather than a biological one. However, because the social roles and norms associated with a particular gender are imposed, typically, on the basis of a sex ascription, gender is frequently experienced as inextricably entwined with sex. Moreover, in cultural animals, the traits are frequently the result of the interactions between biological and social causes. I conclude that the idea that there are two and only two sexes in our own species and that gender can be reduced to secondary sex characteristics is clearly false.
    Found 2 weeks, 5 days ago on PhilSci Archive
  26. 1696016.316871
    Suppose that there is a simple majority election, with two candidates, and there is a large odd number of voters. Suppose polling data makes the election too close to call. How likely is it that you can decide which candidate wins? …
    Found 2 weeks, 5 days ago on Alexander Pruss's Blog
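    A standard back-of-the-envelope model for this question (my illustrative assumption, not necessarily the post's actual setup): with 2k other voters each voting 50/50 independently, your vote is decisive exactly when the others split evenly, which happens with probability C(2k, k) / 4^k ≈ 1/√(πk).

    ```python
    from math import comb, pi, sqrt

    def prob_decisive(k):
        """Probability that 2k independent fair-coin voters tie,
        so that one additional vote decides the election."""
        return comb(2 * k, k) / 4**k

    k = 5000  # 10,000 other voters
    print(prob_decisive(k))  # roughly 0.008
    print(1 / sqrt(pi * k))  # Stirling approximation, very close to the above
    ```

    The 1/√(πk) asymptotics show why the probability of casting the deciding vote shrinks slowly, like one over the square root of the electorate size, rather than like one over the electorate size.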
  27. 1740076.316888
    This is a progress report on some joint work with Xiaoyan Li, Nathaniel Osgood and Evan Patterson. Together with collaborators we have been developing software for ‘system dynamics’ modelling, and applying it to epidemiology—though it has many other uses. …
    Found 2 weeks, 6 days ago on Azimuth
  28. 1740077.316904
    It’s at the end of The Last Word by Thomas Nagel that fear of religion comes onstage. Capping a run of chapters with pleasingly-simple titles—Language, Logic, Science, and Ethics—the topic of the final chapter is an outburst of complexity: “Evolutionary Naturalism and the Fear of Religion.” This fear, Nagel asserts, has had “pernicious consequences for modern intellectual life.” While he opposes those consequences, he also understands them, because, Nagel admits, he is “strongly subject to this fear” himself. …
    Found 2 weeks, 6 days ago on Mostly Aesthetics
  29. 1820676.31692
    One of the few aphorisms about character not to appear in Marjorie Garber’s compendious book about the subject is due to Albert Camus: “when one has no character, one must apply a method.” Garber has both. …
    Found 3 weeks ago on Under the Net
  30. 1838719.316935
    The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems, and the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science. In reclaiming this older idea of AI, however, it is important not to repeat conceptual mistakes of the past (and present) that brought us to where we are today.
    Found 3 weeks ago on Iris van Rooij's site