-
The meta-inductive approach to induction justifies induction by proving its optimality. The argument for the optimality of induction proceeds in two steps. The first, ‘a priori’ step aims to show that meta-induction is optimal, and the second, ‘a posteriori’ step aims to show that meta-induction selects object-induction in our world. I critically evaluate the second step and raise two problems: the identification problem and the indetermination problem. In light of these problems, I assess the prospects of any meta-inductive approach to induction.
-
While causal models are introduced very much like a formal logical system, they have not yet been taken to the level of a proper logic of causal reasoning with structural equations. In this paper, we furnish causal models with a distinct deductive system and a corresponding model-theoretic semantics. Interventionist conditionals will be defined in terms of inferential relations in this logic of causal models.
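As a toy illustration of the objects involved (a minimal sketch, not the paper's deductive system; all variable names and values are invented), here is a structural-equation model and a Pearl-style intervention in Python:

```python
# A causal model as structural equations; intervening on a variable
# replaces its equation by a constant. Interventionist conditionals
# ("if the rain were switched off, would the grass be wet?") are then
# evaluated in the modified model.
equations = {
    "rain":      lambda v: 1,                       # exogenous: it rains
    "sprinkler": lambda v: 0,                       # exogenous: sprinkler off
    "wet":       lambda v: v["rain"] or v["sprinkler"],
}

def solve(eqs):
    """Evaluate an acyclic model by repeated substitution."""
    values = {}
    while len(values) < len(eqs):
        for var, f in eqs.items():
            if var not in values:
                try:
                    values[var] = f(values)
                except KeyError:
                    pass                            # parents not computed yet
    return values

def do(eqs, var, value):
    """Intervention do(var := value): overwrite var's equation."""
    return {**eqs, var: lambda v: value}

print(solve(equations)["wet"])                      # 1: rain makes the grass wet
print(solve(do(equations, "rain", 0))["wet"])       # 0: under do(rain := 0)
```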
-
That science is value-dependent has been taken to raise problems for the democratic legitimacy of scientifically informed public policy. An increasingly common solution is to propose that science itself ought to be ‘democratised.’ Of the literature aiming to provide principled means of facilitating this, most has been largely concerned with developing accounts of how public values might be identified in order to resolve scientific value-judgements. Through a case-study of the World Health Organisation’s 2009 redefinition of ‘pandemic’ in response to H1N1, this paper proposes that this emphasis might be unhelpfully pre-emptive, pending more thorough consideration of the question of whose values different varieties of epistemic risk ought to be negotiated in reference to. A choice of pandemic definition inevitably involves the consideration of a particular variety of epistemic risk, described here as ontic risk. In analogy with legislative versus judicial contexts, I argue that the democratisation of ontic risk assessments could bring inductive risk assessments within the scope of democratic control without necessitating that those inductive risk assessments be independently subject to democratic processes. This possibility is emblematic of a novel strategy for mitigating the opportunity costs that successful democratisation would incur for scientists: careful attention to the different normative stakes of different epistemic risks can provide principled grounds on which to propose that the democratisation of science need only be partial.
-
The self represents a multifactorial entity made up of several interrelated constructs. It is suggested that self-talk orchestrates interactions between most self-processes—especially those entailing self-reflection. A review of the literature is performed, specifically looking for representative studies (n = 12) presenting correlations between self-report measures of self-talk and self-reflective processes. Self-talk questionnaires include the Self-Talk Scale, the Varieties of Inner Speech Questionnaire, the General Inner Speech Questionnaire, and the Inner Speech Scale. The main self-reflection measures are the Rumination and Reflection Questionnaire, the Self-Consciousness Scale, and the Philadelphia Mindfulness Scale. Most measures comprise subscales which are also discussed. Findings include: (1) positive significant correlations between self-talk used for self-management/assessment and self-reflection, arguably because the latter entails self-regulation, which itself relies on self-directed speech; (2) positive significant correlations between critical self-talk and self-rumination, as both may recruit negative, repetitive, and uncontrollable self-thoughts; (3) negative associations between self-talk and the self-acceptance aspect of mindfulness, likely because thinking about oneself in the present in a non-judgmental way is best achieved by repressing one’s inner voice. Limitations are discussed, including the selective nature of the reported correlations. Experimentally manipulating self-talk would make it possible to further explore causal associations with self-processes.
-
The de Broglie-Bohm pilot-wave theory asserts that a complete characterization of an N-particle system is given by its wave function together with the (at-all-times-defined) positions of the particles, with the wave function always satisfying the Schrödinger equation and the positions evolving according to the deterministic “guiding equation”. A complete agreement with the predictive apparatus of standard quantum mechanics, including the uncertainty principle and the probabilistic Born rule, is then said to emerge from these equations, without having to confer any special status on measurements or observers. Two key elements behind the proof of this complete agreement are absolute uncertainty and the POVM theorem. The former involves an alleged “naturally emerging, irreducible limitation on the possibility of obtaining knowledge within pilot-wave theory” and the latter establishes that the outcome distributions of all measurements are described by POVMs. Here, we argue that the derivations of absolute uncertainty and the POVM theorem depend upon the questionable assumption that “information is always configurationally grounded”. We explain in detail why the offered rationale behind such an assumption is deficient and explore the consequences of having to let go of it.
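For reference, a textbook statement (not drawn from the paper) of the two dynamical laws the abstract mentions, for N particles with positions \(Q_k\) and masses \(m_k\):

\[ i\hbar\,\frac{\partial \psi}{\partial t} = \Big(-\sum_{k=1}^{N}\frac{\hbar^{2}}{2m_{k}}\nabla_{k}^{2} + V\Big)\psi, \qquad \frac{dQ_{k}}{dt} = \frac{\hbar}{m_{k}}\,\mathrm{Im}\,\frac{\nabla_{k}\psi}{\psi}(Q_{1},\dots,Q_{N}). \]

In this literature, the agreement with the Born rule rests on the quantum equilibrium distribution \(\rho = |\psi|^{2}\), which is preserved by the guiding equation.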
-
In John Norton’s Material Theory of Induction (MTI), background facts provide the warrant for inductive inference and determine evidential relevance. Replication, however, is excluded as a principle of inductive logic. While Norton argues that replication lacks the precision and methodological clarity to serve as a material principle of inference, I argue that replication nonetheless functions as an epistemic principle of induction. I examine how replication contributes to epistemic justification within both externalist and internalist frameworks and show that its role extends beyond procedural repetition. Replication acts as a reliable belief-forming process for identifying stable facts and inferences. This reframes MTI as a theory shaped not only by local facts but by how scientists determine which facts can function as background warrant.
-
Daniel Dennett’s view about consciousness in nonhuman animals has two parts. One is a methodological injunction that we rely on our best theory of consciousness to settle that issue, a theory that must initially work for consciousness in humans. The other part is Dennett’s application of his own theory of consciousness, developed in Consciousness Explained (1991), which leads him to conclude that nonhuman animals are likely never in conscious mental states. I defend the methodological injunction as both sound and important, and argue that the alternative approaches that dominate the literature are unworkable. But I also urge that Dennett’s theory of consciousness and his arguments against conscious states in nonhuman animals face significant difficulties. Those difficulties are avoided by a higher-order-thought theory of consciousness, which is close to Dennett’s theory, and provides leverage in assessing which kinds of mental state are likely to be conscious in nonhuman animals. Finally, I describe a promising experimental strategy for showing that conscious states do occur in some nonhuman animals, which fits comfortably with the higher-order-thought theory but not with Dennett’s.
-
Topological Data Analysis (TDA) is a relatively recent method of data analysis based on the mathematical theory of Persistent Homology [36], [30], [14]. TDA has proved effective in various fields of data-driven research, including the life sciences and biomedical research. As the popular idiom goes, TDA helps to identify the shape of data, which turns out to be informative in many ways. But what precisely can one learn from such shapes? How does this method apply across different scientific disciplines and practical tasks? Sarita Rosenstock in her recent article [42] provided a very valuable presentation of TDA for a general philosophical readership and explored the above epistemological questions in general terms. The present work extends Rosenstock’s study in three ways. First, it broadens the theoretical context of the discussion by bringing in some related epistemological problems concerning today’s data analysis and data-driven research (Section 2). Second, it places the epistemological discussion of TDA in a wider historical context by pointing to some relevant earlier developments in pure and applied mathematics (Sections 3 and 4). Finally, the present chapter focuses on applications of TDA in biomedical research and tests the theoretical epistemological conclusions obtained in the earlier sections against some concrete examples (Section 6).
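To make the “shape of data” idea concrete, here is a minimal, self-contained Python sketch of the 0-dimensional layer of persistent homology (connected components in a Vietoris-Rips filtration), assuming a Euclidean point cloud as input; production TDA libraries such as GUDHI or Ripser also compute the higher-dimensional features (loops, voids) that the text alludes to:

```python
import numpy as np
from itertools import combinations

def persistence_h0(points):
    """0-dimensional persistence of a Euclidean point cloud.

    Every point is born at scale 0; a component dies when the growing
    Rips complex merges it into another. Returns (birth, death) bars;
    the last surviving component never dies (death = inf).
    """
    n = len(points)
    # Pairwise distances are the edge appearance scales of the filtration.
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]           # path halving
            x = parent[x]
        return x

    bars = []
    for scale, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                                # two components merge
            bars.append((0.0, scale))
            parent[ri] = rj
    bars.append((0.0, np.inf))
    return bars

# Two well-separated clusters: every bar dies at a small scale except one
# that dies only at the inter-cluster distance, plus the infinite bar.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (20, 2)),
                   rng.normal(5, 0.1, (20, 2))])
print(persistence_h0(cloud)[-3:])
```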
-
In Part 9 we saw, loosely speaking, that the theory of a hydrogen atom is equivalent to the theory of a massless left-handed spin-½ particle in the Einstein universe—a static universe where space is a 3-sphere. …
-
We present an account of how idealised models provide scientific understanding that is based on the notion of stability: a model provides understanding of a target behaviour when both the model and the target’s perfect model are in a class of models over which that behaviour is stable. The class is characterised in terms of what we call the model’s noetic core, which contains the features that are indispensable to both the model’s and the target’s behaviour. The account is factivist because it insists that models must get right those aspects of the target that they aim to understand, but it disagrees with extant factivist accounts about how models achieve this.
-
As part of the summer break, I’m publishing old essays that may be of interest to new subscribers. This post was originally published on March 27, 2024. If you haven’t already, do not hesitate to subscribe to receive free essays on economics, philosophy, and liberal politics in your mailbox! …
-
Klaus: Sometimes how well or badly off you are at time t1 depends on what happens at a later time t2. A particularly compelling case of this is when at t1 you performed an onerous action with the goal of producing some effect E at t2. …
-
- There is a “minimal humanly observable duration” (mhod) such that a human cannot have a conscious state—say, a pain—shorter than an mhod, but can have a conscious state that’s an mhod long. The “cannot” here is nomic possibility rather than metaphysical possibility. …
-
Hypotheses about how and why animals behave the way they do are frequently labelled as either associative or cognitive. This has been taken as evidence that there is a fundamental distinction between two kinds of behavioural processes. However, there is significant disagreement about how to define this distinction and about whether it ought to be rejected entirely. Rather than seeking a definition of the associative-cognitive distinction, or advocating for its rejection, I argue that it is an artefact of the way that comparative psychologists generate hypotheses. I suggest that hypotheses about non-human animal behaviour are often generated by analogy with hypotheses drawn from human psychology and associative learning theory, a justifiable strategy since analogies help to establish the pursuit-worthiness of a hypothesis. Any apparent distinction is a misleading characterisation of what is a complex web of hypotheses that explain diverse behavioural phenomena. The analogy view of the distinction has three advantages. It explains the apparent existence of the distinction as arising from a common inference strategy in science, analogical reasoning. It accounts for why the distinction has been difficult to articulate, given the diversity of possible analogies. Finally, it delimits the role of the distinction in downstream inferences about animal behaviour.
-
This commentary aims to support Tim Crane’s account of the structure of intentionality by showing how intentional objects are naturalistically respectable, how they pair with concepts, and how they are to be held distinct from the referents of thought. Crane is right to reject the false dichotomy on which accounts of intentionality must either be reductive and naturalistic or be non-reductive and traffic in mystery. The paper has three main parts. First, I argue that the notion of an intentional object is a phenomenological one, meaning that it must be understood as an object of thought for a subject. Second, I explain how Mark Sainsbury’s Display Theory of Attitude Attribution pairs well with Crane’s notion of an intentional object and allows for precisification in intentional state attributions while both avoiding exotica and capturing the subject’s perspective on the world. Third, I explain the reification fallacy, the fallacy of examining intentional objects as if they exist independently of subjects and their conceptions of them. This work helps to bring out how intentionality can fit in the natural world while at the same time not reducing aboutness to some non-intentional properties of the natural world.
-
“We say we believe that all children can learn, but few of us really believe it.” (Lisa Delpit)

Teachers are expected to believe in the potential of every student in front of them. To believe otherwise is to give up on a central premise of the educational mission, that students can be taught. However, the people who come into the classroom have different levels of knowledge, skills, and motivations. To deny that what the student brings to the classroom matters to their potential progress is to deny empirical reality. Teachers face a tension between cultivating high expectations for student success and recognizing the limitations that a student and their circumstances impose.
-
We present a reformulation of the model predictive control problem using a Legendre basis. To do so, we use a Legendre representation both for prediction and optimization. For prediction, we use a neural network to approximate the dynamics by mapping a compressed Legendre representation of the control trajectory and initial conditions to the corresponding compressed state trajectory. We then reformulate the optimization problem in the Legendre domain and demonstrate methods for including optimization constraints. We present simulation results demonstrating that our implementation provides a speedup of 31-40 times for comparable or lower tracking errors, with or without constraints, on a benchmark task.
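To illustrate the compression step only (the paper's neural-network dynamics surrogate and constraint handling are not reproduced here; the signal, the degree, and all names are invented for the sketch), here is a short Python example using NumPy's Legendre module:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Compress a sampled control trajectory into a handful of Legendre
# coefficients; the reformulated MPC problem would optimize over such
# coefficient vectors instead of the full time series.
t = np.linspace(0.0, 1.0, 200)
u = np.sin(3 * np.pi * t) + 0.5 * t      # stand-in control signal
x = 2.0 * t - 1.0                        # map onto the Legendre domain [-1, 1]

deg = 8                                  # illustrative compression order
coeffs = L.legfit(x, u, deg)             # least-squares fit: 9 coefficients
u_hat = L.legval(x, coeffs)              # reconstruction from compressed form

print(f"{t.size} samples -> {coeffs.size} coefficients, "
      f"max reconstruction error {np.max(np.abs(u - u_hat)):.2e}")
```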
-
Magnetic monopoles, hypothetical entities with isolated magnetic charges (Dirac) or effective charges from field configurations (’t Hooft-Polyakov), are posited to symmetrize electromagnetism and explain electric charge quantization, yet remain undetected. This paper demonstrates that such monopoles—Abelian Dirac and non-Abelian ’t Hooft-Polyakov—are incompatible with a potential-centric ontology, where the gauge potential Aµ, fixed in one true gauge, the Lorenz gauge, is the fundamental physical entity mediating local interactions, as evidenced by the Aharonov-Bohm effect. We derive a no-go result, showing that magnetic monopoles require singular (e.g., Dirac strings) or non-unique (e.g., Wu-Yang patches) potentials in all gauges to resolve a Stokes’ theorem contradiction, violating the ontology’s requirement for unique, non-singular potentials in the true gauge. This result extends to sphalerons in SU(2) × U(1) electroweak theory and D-branes in string theory, whose Ramond-Ramond potentials Cp+1 exhibit an AB-like effect but require singular or non-unique potentials due to non-zero flux, leading to a theoretical self-contradiction independent of experimental evidence. In contrast, cosmic strings, with a non-singular, single-valued Aµ in a single gauge, satisfy Stokes’ theorem and the ontology’s criteria.
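The Stokes’-theorem contradiction at the heart of the no-go result can be stated in one line (textbook form, not a quotation from the paper): for a monopole of magnetic charge \(g\) enclosed by a closed surface \(S\), a globally smooth, single-valued potential with \(\mathbf{B} = \nabla \times \mathbf{A}\) would force

\[ 4\pi g = \oint_{S} \mathbf{B}\cdot d\mathbf{S} = \oint_{S} (\nabla\times\mathbf{A})\cdot d\mathbf{S} = 0, \]

since a closed surface has no boundary. Hence the potential must either be singular somewhere on \(S\) (the Dirac string) or be defined only patchwise (the Wu-Yang construction), which is exactly what the potential-centric ontology forbids.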
-
In their recent paper published in Nature, Sharaglazova et al. report an optical microcavity experiment yielding an “energy-speed relationship” for quantum particles in evanescent states, which they infer from the observed population transfer between two coupled waveguides. The authors argue that their findings challenge the validity of Bohmian particle dynamics because, according to the Bohmian guiding equation, the velocities in the classically forbidden region would be zero. In this note, we explain why this claim is false and the experimental findings are in perfect agreement with Bohmian mechanics. We also clarify why the operationally defined speeds reported in the paper are unrelated to particle velocities in the sense described by Bohmian mechanics. In contrast to other recent replies, our analysis relies solely on the standard Bohmian guidance equation for single particles.
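For orientation (standard single-particle Bohmian mechanics, not the note’s own derivation): writing \(\psi = R\,e^{iS/\hbar}\), the guidance equation gives

\[ \mathbf{v} = \frac{\nabla S}{m}, \]

so for a stationary state whose wave function can be chosen real, as in a classically forbidden evanescent region, \(S\) is constant and the Bohmian velocity vanishes, which is why the guiding equation predicts zero velocities there.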
-
Penultimate version. Forthcoming in A. Drezet (ed.): Pilot-wave and beyond: Louis de Broglie and David Bohm’s quest for a quantum ontology, Foundations of Physics 2023. The paper explains why the de Broglie-Bohm theory reduces to Newtonian mechanics in the macroscopic classical limit. The quantum-to-classical transition is based on three steps: (i) interaction with the environment produces effectively factorized states, leading to the formation of effective wave functions and hence decoherence; (ii) the effective wave functions selected by the environment, the pointer states of decoherence theory, will be well-localized wave packets, typically Gaussian states; (iii) the quantum potential of a Gaussian state becomes negligible under standard classicality conditions; therefore, the effective wave function will move according to Newtonian mechanics in the correct classical limit. As a result, a Bohmian system in interaction with the environment will be described by an effective Gaussian state and, when the system is macroscopic, it will move according to Newtonian mechanics.
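Step (iii) turns on the quantum potential, which in the standard polar decomposition \(\psi = R\,e^{iS/\hbar}\) reads

\[ Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2} R}{R}; \]

for a Gaussian packet of width \(\sigma\), its magnitude at the packet’s centre is of order \(\hbar^{2}/(m\sigma^{2})\), so it becomes negligible against classical energy scales for heavy, macroscopically wide systems (a textbook estimate, not a quotation from the paper).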
-
The paper advances the hypothesis that the multi-field is a determinable, that is, a physical object characterized by indeterminate values with respect to some properties. The multi-field is a realist interpretation of the wave function in quantum mechanics; specifically, it interprets the wave function as a new physical entity in three-dimensional space: a “multi-field” (Hubert & Romano 2018; Romano 2021). The multi-field is similar to a field in that it assigns determinate values to N-tuples of points, but it is also different from a field in that it does not assign pre-existing values at each point of three-dimensional space. In particular, the multi-field values at empty points (points where no particles are located) are indeterminate until a particle is located at those points. The paper suggests that the multi-field so defined can be precisely characterized in terms of a determinable-based, object-level account of metaphysical indeterminacy. On this view, the multi-field, as a novel physical entity, is in fact a metaphysically indeterminate quantum object, that is, a determinable.
-
The explanatory structure of quantum mechanics and quantum gravity is marked by complementarity: the existence of distinct, mutually incompatible descriptions that are nonetheless each empirically valid in specific observational settings. In recent work, Ryoo (2025) proposed a context-dependent mapping framework as an epistemic tool to capture this phenomenon. This framework maps each physically defined “context” to a set of laws that yield coherent and predictive explanations within that context. In this paper, I formally define the notion of “context” underlying the mapping, offer a general structural typology, and present case studies from quantum gravity and entanglement wedge reconstruction to illustrate how explanatory fragmentation is grounded in physical theory rather than epistemic limitation.
-
In my new paper, “Severe Testing: Error Statistics versus Bayes Factor Tests”, now out online at The British Journal for the Philosophy of Science, I “propose that commonly used Bayes factor tests be supplemented with a post-data severity concept in the frequentist error statistical sense”. …
-
Epistemologists have devoted an enormous amount of attention to justification. Hundreds of papers have tried to analyze the conditions under which a belief is epistemically justified; hundreds more have offered counterexamples to these analyses. Even epistemologists who look askance at conceptual analysis have found it fruitful to explore the connections between justification and other epistemic notions, such as knowledge, rationality, and evidence. Some have even suggested that justification is the central notion in epistemology.
-
The Inscrutable Evidence Argument targets the thesis that credences are thoughts about evidential probabilities (CTEP). It does so using cases where one knows one’s evidence speaks either strongly in favor of or strongly against a proposition, but one doesn’t know which; in such cases, it seems possible to have a middling credence in that proposition even though one doesn’t think the probability of the proposition is near 50%—contra CTEP. In this paper, I defend CTEP by conceiving of the thoughts involved differently than usual. My diagnosis of the argument turns on appreciating the difference between believing and accepting (in the sense of Bratman 1992) that a proposition has probability n, where accepting is context dependent and allows for guidance in action without commitment to truth. I develop this diagnosis in two directions, one according to which acceptances of probability-involving propositions are credences and another according to which they aren’t. Both views elude the Inscrutable Evidence Argument and are compatible with CTEP.
-
In a short note written in 1929, Frank Ramsey put forward a reliabilist account of knowledge anticipating those given by Armstrong (1973) and Goldman (1967), among others, a few decades later. Some think that the note comprises the bulk of what Ramsey has to say about epistemology. But Ramsey’s ideas about epistemology extend beyond the note. Relatively little attention has been paid to his reliabilist account of reasonable belief. Even less attention has been paid to his reliabilist account of reasonable degree of belief. In this paper, I spell out these aspects of Ramsey’s epistemology in more detail than has been done so far. I argue that Ramsey anticipates contemporary reliabilist accounts of justified belief and justified degree of belief. I also flesh out Ramsey’s reasons for being a reliabilist. This is worth doing since Ramsey has one of the earliest arguments for reliabilism, but it has received scarce attention. Also, Ramsey calls his reliabilism “a kind of pragmatism,” and examining the argument will help us clarify Ramsey’s pragmatist commitments and better understand his version of reliabilism. I argue that when viewed through contemporary lenses, Ramsey’s reliabilism contains revisionist elements: he’s not opposed to what we now call “conceptual engineering.”
-
Emotions can get things right and serve us in many productive ways. They can also get things wrong and harm our epistemic or practical endeavors. Resenting somebody for having insulted your friend gets it wrong when your friend well understood that the remark was a joke. On the other hand, if your friend is not familiar with the given cultural context and hence couldn’t quite grasp the subtly sexist nature of the joke, your resentment might not only be appropriate but also help her navigate the new social context. Hoping that your meeting with your supervisor will be productive might motivate you to prepare better but will be inappropriate if all your previous meetings were failures.
-
Besides disagreeing about how much one should donate to charity, moral theories also disagree about where one should donate. In many cases, one intuitively attractive option is to split your donations across all of the charities that are recommended by theories in which you have positive credence, with each charity’s share being proportional to your credence in the theories that recommend it. Despite the fact that something like this approach is already widely used by real-world philanthropists to distribute billions of dollars, it is not supported by any account of handling decisions under moral uncertainty that has been proposed thus far in the literature. This paper develops a new bargaining-based approach that honors the proportionality intuition. We also show how this approach has several advantages over the best alternative proposals.
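For concreteness, a minimal Python sketch of the proportional-split rule described above (the theories, credences, charities, and budget are all hypothetical, and the paper's bargaining-based account is more sophisticated than this simple rule):

```python
# Each charity's share is the total credence in the moral theories
# that recommend it, scaled by the budget.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}
recommends = {
    "utilitarianism": "global_health",
    "deontology": "human_rights",
    "virtue_ethics": "global_health",
}

budget = 1000.0
shares: dict[str, float] = {}
for theory, credence in credences.items():
    charity = recommends[theory]
    shares[charity] = shares.get(charity, 0.0) + credence * budget

print(shares)  # {'global_health': 700.0, 'human_rights': 300.0}
```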
-
In the last half-century increased awareness of modal issues has been brought to bear on the free will debate. It has been argued that the context dependence of possibility claims can be exploited to mount a defence of compatibilism, the idea being that the kind of possibility to do otherwise ruled out by determinism is distinct from the kind of possibility to do otherwise needed for free will. The potency of this idea, however, is still under-appreciated. It is often confused with conditional analyses of alternative possibilities, and many assume that the forms of possibility the compatibilist points to are somehow less “categorical” than the incompatibilist’s preferred all-in possibility. Moreover, Christian List’s questionable agent-level compatibilism has recently become the main representative of the idea. In fact what is needed—so it is argued here—is to combine increased modal awareness with the traditional compatibilist picture of the relevant freedom being freedom from external compulsion.
-
Contractual inflationists claim that contractual relationships are a source of noninstrumental value in our lives, to be engaged with for their own sake. Some inflationists take this to be the value of “personal detachment.” I argue that though personal detachment can indeed be valuable, that value is not plausibly considered noninstrumental. Even on the most charitable reading of personal detachment—its potential to emancipate us from traditional social relations—these inflationists overlook that it may just as much lead to domination as traditional society does, only this time, due to alienation under market conditions. To salvage our intuitive sense of the emancipatory potential of contract, we can consider the detachment it makes possible to be a form of technology, casting the value of contract in a “merely” instrumental role. I conclude that if we are to reinvigorate the politics of the appeal to personal detachment in contract theory, we have to deflate its value.