-
(1) My 8-year-old son asked me last week, “daddy, did you hear that GPT-5 is now out?” So yes, I’m indeed aware that GPT-5 is now out! I’ve just started playing around with it. For detailed reports on what’s changed and how impressive it is compared to previous models, see for example Zvi #1, #2, #3. …
-
Battisti argues that it is morally problematic to use AI tools for improving the quality of a message sent to a romantic partner, as the message may no longer authentically reflect one’s personality. If AI is used in this manner, there is a risk that what Battisti refers to as an “authenticity-based obligation” is violated. According to Battisti, authenticity-based obligations are nontransferable because they are inherently tied to specific people: “[…] the value of the result lies in the person performing the task, that is, in who undertakes the cognitive and emotional process required to bring it about.” While we find the discussion of authenticity-based obligations interesting, we doubt that this is the right criterion to apply in this context, for at least four reasons.
-
I present a heretofore untheorised form of lay science, called extitutional science, whereby lay scientists, by virtue of their collective experience, are able to detect errors committed by institutional scientists and attempt to have them corrected. I argue that the epistemic success of institutional science is enhanced to the extent that it takes up this extitutional criticism. Since this uptake does not occur spontaneously, extitutional interference in the conduct of institutional science is required. I make a proposal for how to secure this epistemically beneficial form of lay interference.
-
We re-examine the old question of the extent to which mathematics may be compared with a game. Mainly inspired by Hilbert and Wittgenstein, our answer is that mathematics is something like a “rhododendron of language games”, where the rules are inferential. The pure side of mathematics is essentially formalist, where we propose that truth is not carried by theorems corresponding to whatever independent reality and arrived at through proof, but is defined by correctness of rule-following (and as such is objective given these rules). Gödel’s theorems, which are often seen as a threat to formalist philosophies of mathematics, actually strengthen our concept of truth. The applied side of mathematics arises from two practices: first, the dual nature of axiomatization as taking from heuristic practices like physics and informal mathematics whilst giving proofs and logical analysis; and second, the ability to use the inferential role of theorems to make “surrogative” inferences about natural phenomena. Our framework is pluralist, combining various (non-referential) philosophies of mathematics.
-
This paper proposes an alternative to standard first-order logic that seeks greater naturalness, generality, and semantic self-containment. The system removes the first-order restriction, avoids type hierarchies, and dispenses with external structures, making the meaning of expressions depend solely on their constituent symbols. Terms and formulas are unified into a single notion of expression, with set-builder notation integrated as a primitive construct. Connectives and quantifiers are treated as operators among others rather than as privileged primitives. The deductive framework is minimal and intuitive, with soundness and consistency established and completeness examined. While computability requirements may limit universality, the system offers a unified and potentially more faithful model of human mathematical deduction, providing an alternative foundation for formal reasoning.
-
I often find myself thinking that the conventional wisdom in moral philosophy gets a lot of things backwards. For example, I’ve previously discussed how deontology is much more deeply self-effacing (making objectively right actions, and not just bungled attempts to act rightly, lamentable) than consequentialism. …
-
Christopher Devlin Brown’s The Hope and Horror of Physicalism works through different ways of understanding the content of physicalism, evaluates the “existential consequences” of physicalism so understood, and attempts to defend one form of physicalism – “Russellian physicalism” – from consciousness-based objections. I first raise some minor-but-not-too-minor concerns about Brown’s historical account of physicalism. Second, I discuss one version of physicalism (the “theory-based version”) that Brown works with in assessing physicalism’s existential consequences. Third, I raise some questions about Brown’s preferred way of understanding physicalism, which he labels “Russellian physicalism”, and which is a version of “via negativa physicalism”. My discussions are offered in a constructive spirit.
-
Some important policies will change future mortality rates (like climate mitigation), change future fertility rates (like public education), or respond to the emerging challenges of global depopulation. Any such policy will change each of the quality of lives, the quantity of lives, and who will live in the future. Hence, to evaluate economic policies, we need to assess both social risk and variable population. A standard principle for economic policy evaluation is Expected Total Utilitarianism, which maximizes the expected value of the sum of individuals’ transformed lifetime well-being. Despite the prominent use in public economics of both additive utilitarianism and expectation-taking under risk, these methods remain questionable in welfare economics, in part because existing axiomatic justifications make strong assumptions (Fleurbaey, 2010; Golosov et al., 2007).
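The evaluation rule named in the abstract can be made concrete in a few lines. This is an illustrative sketch of Expected Total Utilitarianism only; the concave transform `g`, the outcomes, and the probabilities are hypothetical examples of mine, not drawn from the paper:

```python
# Illustrative sketch (not the paper's model): Expected Total Utilitarianism
# evaluates a risky policy by the expected value of the sum of transformed
# lifetime well-being across whoever exists in each possible outcome.

def g(wellbeing):
    """Hypothetical concave transform of lifetime well-being."""
    return wellbeing ** 0.5

def expected_total_utility(lottery):
    """lottery: list of (probability, list of lifetime well-being levels),
    one entry per possible population outcome; outcomes may differ in
    population size and in who exists."""
    return sum(p * sum(g(w) for w in pop) for p, pop in lottery)

# A policy with a 50/50 chance of a larger, slightly worse-off population
# versus a smaller, better-off one:
policy = [(0.5, [4.0, 4.0, 4.0]), (0.5, [9.0, 9.0])]
print(expected_total_utility(policy))  # 0.5*(2+2+2) + 0.5*(3+3) = 6.0
```

Note how the rule handles variable population directly: each outcome's sum ranges over exactly the people who exist in that outcome, and risk is handled by taking the expectation across outcomes.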
-
To celebrate my sons’ graduation from Vanderbilt, I commissioned a custom set of game chips, using images drawn from the bespoke role-playing games we’ve been playing since they were three years old. Since I wanted top quality and consistency, I didn’t use AI. …
-
Recent work on the philosophy of high energy physics experiments has considerably advanced our understanding of their epistemology, for instance concerning measurements by the ATLAS collaboration at the Large Hadron Collider (Beauchemin 2017). In this paper we aim to highlight and analyze complementary low energy ‘tabletop’ experiments in particle (and other kinds of fundamental) physics. In particular, we contrast ATLAS measurements with high precision measurements of the electron magnetic moment. We find, for instance, that the simplicity of the latter experiment allows for uncertainties to be minimized materially, in the very construction of the apparatus. We also sketch how a notion of ‘frugality’ can be used, in light of considerations of simplicity, to understand the value of low energy experiments with respect to the entrenched field of high energy experiment.
-
In a recent paper, Harriet Fagerberg argues that the disease debate in the philosophy of medicine makes little sense as conceptual analysis but instead should proceed on the assumption that disease is a real kind. I propose an alternative view. The history and practice of medicine give us reasons to doubt that the category of disease forms a real kind. Instead, drawing on work by Quill R. Kukla, I argue that the disease debate makes good sense on an understanding of disease as an institutional kind. As well as explaining key features of the disease debate, this can facilitate a philosophical understanding of disease that captures the eclectic scope of medicine and the complex reasons why conditions get classified as diseases.
-
We explore the causes and outcomes of scientific conceptual change using a case study of the development of the individualized niche concept. We outline a framework for characterizing conceptual change that distinguishes between epistemically adaptive and neutral processes and outcomes of conceptual change. We then apply this framework in tracing how the individualized niche concept arose historically out of population niche thinking and how it exhibits plurality within a contemporary biological research program. While the individualized niche concept was developed adaptively to suit new research goals and empirical findings, some of its pluralistic aspects in contemporary research may have arisen neutrally, that is, for non-epistemic reasons. We suggest reasons for thinking that this plurality is unproblematic and may become useful, e.g., when it allows for the concept to be applied across differing research contexts.
-
Scientific metaphysics can inform discussions of scientific representation in a number of ways. For instance, even a relatively generic commitment to some minimal form of scientific realism suggests that the targets of scientific representations should serve as source material for one’s scientifically-informed ontology. Historical connections between commitments to realism and commitments to reductive approaches in scientific metaphysics further inform a persistent strain of reductive approach to generating scientific representations. In this discussion, I examine two recent challenges to reductive scientific metaphysics from philosophers working across a variety of scientific domains and philosophical traditions: C. Kenneth Waters’ “No General Structure Thesis” and Robert Batterman’s account of scientific metaphysics built on many-body physics. Each of these accounts has what I shall call “anti-fundamentalist” leanings: they reject the premise that fundamental physical theory is the appropriate or best source material for scientific metaphysics. Following Waters, I contrast these leanings with the methodological approach of contemporary structural realism. Additionally, both Waters’ and Batterman’s accounts foreground the role of scale in defining ontological categories, and both reject the reductionist ideal that the stuff at the smallest scale is the most fundamental, the most general, or the most real. I discuss the implications for scientific representation imparted by anti-fundamentalist approaches that emphasize the role of scale in building a scientifically-informed ontology.
-
The meta-inductive approach to induction justifies induction by proving its optimality. The argument for the optimality of induction proceeds in two steps. The first, ‘a priori’ step intends to show that meta-induction is optimal, and the second, ‘a posteriori’ step intends to show that meta-induction selects object-induction in our world. I critically evaluate the second step and raise two problems: the identification problem and the indetermination problem. In light of these problems, I assess the prospects of any meta-inductive approach to induction.
-
While causal models are introduced very much like a formal logical system, they have not yet been taken to the level of a proper logic of causal reasoning with structural equations. In this paper, we furnish causal models with a distinct deductive system and a corresponding model-theoretic semantics. Interventionist conditionals will be defined in terms of inferential relations in this logic of causal models.
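As a rough, informal illustration of the subject matter (not the paper's deductive system or semantics), a causal model can be represented as a set of structural equations, with intervention implemented by replacing an equation with a constant; an interventionist conditional "if do(X = x), then Y = y" then holds just in case the mutilated model yields Y = y. The toy model, the variable names, and the `do` helper below are my own invention:

```python
# Minimal sketch: structural equations as a dict mapping each endogenous
# variable to (parents, function); intervention mutilates the model by
# overwriting a variable's equation with a constant.

def solve(equations, exogenous):
    """Evaluate an acyclic set of structural equations given exogenous values."""
    values = dict(exogenous)
    def value(v):
        if v not in values:
            parents, f = equations[v]
            values[v] = f(*(value(p) for p in parents))
        return values[v]
    for v in equations:
        value(v)
    return values

def do(equations, var, const):
    """Return the mutilated model in which var is set to const."""
    new = dict(equations)
    new[var] = ((), lambda: const)
    return new

# Toy model: S (smoking) -> T (tar) -> C (cough)
model = {
    "T": (("S",), lambda s: s),
    "C": (("T",), lambda t: t),
}
print(solve(model, {"S": 1})["C"])              # 1
print(solve(do(model, "T", 0), {"S": 1})["C"])  # 0: intervening on T screens off S
```

The contrast between evaluating the original model and evaluating the mutilated one is exactly the contrast an interventionist conditional is meant to capture.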
-
That science is value-dependent has been taken to raise problems for the democratic legitimacy of scientifically-informed public policy. An increasingly common solution is to propose that science itself ought to be ‘democratised.’ Of the literature aiming to provide principled means of facilitating such democratisation, most has been largely concerned with developing accounts of how public values might be identified in order to resolve scientific value-judgements. Through a case study of the World Health Organisation’s 2009 redefinition of ‘pandemic’ in response to H1N1, this paper proposes that this emphasis might be unhelpfully pre-emptive, pending more thorough consideration of the question of whose values different varieties of epistemic risk ought to be negotiated in reference to. A choice of pandemic definition inevitably involves the consideration of a particular variety of epistemic risk, described here as ontic risk. In analogy with legislative versus judicial contexts, I argue that the democratisation of ontic risk assessments could bring inductive risk assessments within the scope of democratic control without necessitating that those inductive risk assessments be independently subject to democratic processes. This possibility is emblematic of a novel strategy for mitigating the opportunity costs that successful democratisation would incur for scientists: careful attention to the different normative stakes of different epistemic risks can provide principled grounds on which to propose that the democratisation of science need only be partial.
-
The self represents a multifactorial entity made up of several interrelated constructs. It is suggested that self-talk orchestrates interactions between most self-processes—especially those entailing self-reflection. A review of the literature is performed, specifically looking for representative studies (n = 12) presenting correlations between self-report measures of self-talk and self-reflective processes. Self-talk questionnaires include the Self-Talk Scale, the Varieties of Inner Speech Questionnaire, the General Inner Speech Questionnaire, and the Inner Speech Scale. The main self-reflection measures are the Rumination and Reflection Questionnaire, the Self-Consciousness Scale, and the Philadelphia Mindfulness Scale. Most measures comprise subscales which are also discussed. Findings include: (1) positive significant correlations between self-talk used for self-management/assessment and self-reflection, arguably because the latter entails self-regulation, which itself relies on self-directed speech; (2) positive significant correlations between critical self-talk and self-rumination, as both may recruit negative, repetitive, and uncontrollable self-thoughts; (3) negative associations between self-talk and the self-acceptance aspect of mindfulness, likely because thinking about oneself in the present in a non-judgmental way is best achieved by repressing one’s inner voice. Limitations are discussed, including the selective nature of the reported correlations. Experimentally manipulating self-talk would make it possible to further explore causal associations with self-processes.
-
The de Broglie-Bohm pilot-wave theory asserts that a complete characterization of an N-particle system is given by its wave function together with the (at-all-times-defined) positions of the particles, with the wave function always satisfying the Schrödinger equation and the positions evolving according to the deterministic “guiding equation”. A complete agreement with the predictive apparatus of standard quantum mechanics, including the uncertainty principle and the probabilistic Born rule, is then said to emerge from these equations, without having to confer any special status to measurements or observers. Two key elements behind the proof of this complete agreement are absolute uncertainty and the POVM theorem. The former involves an alleged “naturally emerging, irreducible limitation on the possibility of obtaining knowledge within pilot-wave theory” and the latter establishes that the outcome distributions of all measurements are described by POVMs. Here, we argue that the derivations of absolute uncertainty and the POVM theorem depend upon the questionable assumption that “information is always configurationally grounded”. We explain in detail why the offered rationale behind such an assumption is deficient and explore the consequences of having to let go of it.
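For concreteness, the two dynamical equations the abstract refers to can be written in standard pilot-wave notation (my transcription of the textbook form, not quoted from the paper):

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = \Big(-\sum_{k=1}^{N}\frac{\hbar^{2}}{2m_{k}}\nabla_{k}^{2} + V\Big)\psi,
\qquad
\frac{d\mathbf{Q}_{k}}{dt}
  = \frac{\hbar}{m_{k}}\,\operatorname{Im}\!\left(\frac{\nabla_{k}\psi}{\psi}\right)\bigg|_{(\mathbf{Q}_{1},\dots,\mathbf{Q}_{N})}
```

Here $\psi$ is the wave function on configuration space and $\mathbf{Q}_{k}$ is the actual position of particle $k$; the guiding equation evaluates the velocity field at the actual configuration. The quantum equilibrium distribution $\rho = |\psi|^{2}$ is preserved by this dynamics, which is what underwrites the recovery of the Born rule mentioned in the abstract.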
-
In John Norton’s Material Theory of Induction (MTI), background facts provide the warrant for inductive inference and determine evidential relevance. Replication, however, is excluded as a principle of inductive logic. While Norton argues replication lacks the precision and methodological clarity to serve as a material principle of inference, I argue that replication nonetheless functions as an epistemic principle of induction. I examine how replication contributes to epistemic justification within both externalist and internalist frameworks and show that its role extends beyond procedural repetition. Replication acts as a reliable belief-forming process for identifying stable facts and inferences. This reframes MTI as a theory shaped not only by local facts but by how scientists determine which facts can function as background warrant.
-
Daniel Dennett’s view about consciousness in nonhuman animals has two parts. One is a methodological injunction that we rely on our best theory of consciousness to settle that issue, a theory that must initially work for consciousness in humans. The other part is Dennett’s application of his own theory of consciousness, developed in Consciousness Explained (1991), which leads him to conclude that nonhuman animals are likely never in conscious mental states. I defend the methodological injunction as both sound and important, and argue that the alternative approaches that dominate the literature are unworkable. But I also urge that Dennett’s theory of consciousness and his arguments against conscious states in nonhuman animals face significant difficulties. Those difficulties are avoided by a higher-order-thought theory of consciousness, which is close to Dennett’s theory, and provides leverage in assessing which kinds of mental state are likely to be conscious in nonhuman animals. Finally, I describe a promising experimental strategy for showing that conscious states do occur in some nonhuman animals, which fits comfortably with the higher-order-thought theory but not with Dennett’s.
-
Topological Data Analysis (TDA) is a relatively recent method of data analysis based on the mathematical theory of Persistent Homology [36], [30], [14]. TDA has proved effective in various fields of data-driven research, including the life sciences and biomedical research. As the popular idiom goes, TDA helps to identify the shape of data, which turns out to be informative in many ways. But what precisely can one learn from such shapes? How does this method apply across different scientific disciplines and practical tasks? Sarita Rosenstock in her recent article [42] provided a very valuable presentation of TDA for a general philosophical readership and explored the above epistemological questions in general terms. The present work extends Rosenstock’s study in three different ways. First, it broadens the theoretical context of the discussion by bringing in some related epistemological problems concerning today’s data analysis and data-driven research (Section 2). Second, it brings the epistemological discussion of TDA into a wider historical context by pointing to some relevant earlier developments in pure and applied mathematics (Sections 3, 4). Finally, the present chapter focuses on applications of TDA in biomedical research and tests the theoretical epistemological conclusions obtained in the earlier sections against some concrete examples (Section 6).
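To give a flavour of what "the shape of data" means in the simplest case, here is a toy, pure-Python illustration (mine, not Rosenstock's or the chapter's) of 0-dimensional persistent homology for the sublevel sets of a 1-D signal: each local minimum gives birth to a connected component, and a component dies when it merges with one born at a lower value (the "elder rule"). The resulting (birth, death) pairs summarize how prominent each basin of the signal is:

```python
# Sketch of 0-dimensional sublevel-set persistence via union-find.
def persistence_0d(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in order:                        # sweep values from low to high
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):           # merge with already-active neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if birth[ri] > birth[rj]:
                        ri, rj = rj, ri    # elder rule: younger component dies
                    pairs.append((birth[rj], values[i]))
                    parent[rj] = ri
    pairs = [(b, d) for b, d in pairs if d > b]   # drop zero-persistence pairs
    pairs.append((min(values), float("inf")))     # the oldest component never dies
    return sorted(pairs)

print(persistence_0d([3.0, 1.0, 2.0, 0.0, 4.0]))
# [(0.0, inf), (1.0, 2.0)]
```

The global minimum (born at 0.0) persists forever, while the secondary basin born at 1.0 dies at 2.0 when the two basins merge; realistic TDA pipelines compute analogous pairs in higher homological dimensions over point clouds.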
-
In Part 9 we saw, loosely speaking, that the theory of a hydrogen atom is equivalent to the theory of a massless left-handed spin-½ particle in the Einstein universe—a static universe where space is a 3-sphere. …
-
We present an account of how idealised models provide scientific understanding that is based on the notion of stability: a model provides understanding of a target behaviour when both the model and the target’s perfect model are in a class of models over which that behaviour is stable. The class is characterised in terms of what we call the model’s noetic core, which contains the features that are indispensable to both the model’s and the target’s behaviour. The account is factivist because it insists that a model must get right those aspects of the target that it aims to understand, but it disagrees with extant factivist accounts about how models achieve this.
-
As part of the summer break, I’m publishing old essays that may be of interest to new subscribers. This post was originally published on March 27, 2024. If you have not already done so, do not hesitate to subscribe to receive free essays on economics, philosophy, and liberal politics in your mailbox! …
-
Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. …
-
- There is a “minimal humanly observable duration” (mhod) such that a human cannot have a conscious state—say, a pain—shorter than an mhod, but can have a conscious state that’s an mhod long. The “cannot” here concerns nomic possibility rather than metaphysical possibility. …
-
Hypotheses about how and why animals behave the way they do are frequently labelled as either associative or cognitive. This has been taken as evidence that there is a fundamental distinction between two kinds of behavioural processes. However, there is significant disagreement about how to define this distinction and about whether it ought to be rejected entirely. Rather than seeking a definition of the associative-cognitive distinction, or advocating for its rejection, I argue that it is an artefact of the way that comparative psychologists generate hypotheses. I suggest that hypotheses for non-human animal behaviour are often generated by analogy with hypotheses drawn from human psychology and associative learning theory, a justifiable strategy since analogies help to establish the pursuit-worthiness of a hypothesis. Any apparent distinction is a misleading characterisation of what is a complex web of hypotheses that explain diverse behavioural phenomena. The analogy view of the distinction has three advantages. It motivates the apparent existence of the distinction based on a common inference strategy in science, analogical reasoning. It accounts for why the distinction has been difficult to articulate, because of the diversity of possible analogies. Finally, it delimits the role of the distinction in downstream inferences about animal behaviour.
-
This commentary aims to support Tim Crane’s account of the structure of intentionality by showing how intentional objects are naturalistically respectable, how they pair with concepts, and how they are to be held distinct from the referents of thought. Crane is right to reject the false dichotomy that accounts of intentionality must be either reductive and naturalistic or non-reductive and traffic in mystery. The paper has three main parts. First, I argue that the notion of an intentional object is a phenomenological one, meaning that it must be understood as an object of thought for a subject. Second, I explain how Mark Sainsbury’s Display Theory of Attitude Attribution pairs well with Crane’s notion of an intentional object and allows for precisification in intentional state attributions while both avoiding exotica and capturing the subject’s perspective on the world. Third, I explain the reification fallacy, the fallacy of examining intentional objects as if they exist independently of subjects and their conceptions of them. This work helps to bring out how intentionality can fit in the natural world while at the same time not reducing aboutness to some non-intentional properties of the natural world.
-
“We say we believe that all children can learn, but few of us really believe it.” (Lisa Delpit) Teachers are expected to believe in the potential of every student in front of them. To believe otherwise is to give up on a central premise of the educational mission: that students can be taught. However, the people who come into the classroom have different levels of knowledge, skills, and motivations. To deny that what the student brings to the classroom matters to their potential progress is to deny empirical reality. Teachers face a tension between cultivating high expectations for student success and recognizing the limitations that a student and their circumstances impose.
-
We present a reformulation of the model predictive control problem using a Legendre basis. To do so, we use a Legendre representation both for prediction and optimization. For prediction, we use a neural network to approximate the dynamics by mapping a compressed Legendre representation of the control trajectory and initial conditions to the corresponding compressed state trajectory. We then reformulate the optimization problem in the Legendre domain and demonstrate methods for including optimization constraints. We present simulation results demonstrating that our implementation provides a speedup of 31-40 times for comparable or lower tracking errors, with or without constraints, on a benchmark task.
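The compression step at the heart of this approach can be sketched in a few lines. This is a hypothetical illustration of representing a sampled trajectory by a handful of Legendre coefficients; the paper's actual pipeline, network architecture, and benchmark are not reproduced here, and the example signal and truncation degree are my own choices:

```python
# Sketch: compress a control trajectory into a short vector of Legendre
# coefficients, then reconstruct it. A dynamics-approximating network would
# map such compressed control representations to compressed state trajectories.
import numpy as np
from numpy.polynomial import legendre as L

t = np.linspace(-1.0, 1.0, 200)        # normalized time grid (Legendre domain)
u = np.sin(3 * t) + 0.5 * t**2         # example control trajectory

coeffs = L.legfit(t, u, deg=8)         # compressed representation: 9 numbers
u_rec = L.legval(t, coeffs)            # reconstruction on the same grid

print(coeffs.shape)                    # (9,)
print(float(np.max(np.abs(u - u_rec))) < 1e-2)  # True: small reconstruction error
```

Because smooth trajectories have rapidly decaying Legendre coefficients, 200 samples compress to 9 numbers with small error, which is what makes optimizing directly over coefficients attractive.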