-
Through a series of empirical studies involving native speakers of English, German, and Chinese, this paper reveals that the predicate “true” is inherently ambiguous in the empirical domain. Truth statements such as “It is true that Tom is at the party” appear to be ambiguous between two readings. On the first reading, the statement means “Reality is such that Tom is at the party.” On the second reading, it means “According to what X believes, Tom is at the party.” While there appear to be some cross-cultural differences in the interpretation of the statements, the overall findings robustly indicate that “true” has multiple meanings in the realm of empirical matters.
-
Semantic features are components of concepts. In philosophy, there is a predominant focus on those features that are necessary (and jointly sufficient) for the application of a concept. Consequently, the method of cases has been the paradigm tool among philosophers, including experimental philosophers. However, whether a feature is salient is often far more important for cognitive processes like memory, categorization, recognition, and even decision-making than whether it is necessary. The primary objective of this paper is to emphasize the significance of researching salient features of concepts. To this end, I advocate the use of semantic feature production tasks, which not only enable researchers to determine whether a feature is salient, but also provide a complementary method for studying ordinary language use. I discuss empirical data on three concepts – conspiracy theory, female/male professor, and life – to illustrate that semantic feature production tasks can help philosophers (a) identify the salient features that play a central role in our reasoning about and with concepts, (b) examine socially relevant stereotypes, and (c) investigate the structure of concepts.
-
Hume [Hume 1739: bk. I, pt. III, sec. XI] held, incredibly, that objective chance is a projection of our beliefs. Bruno de Finetti [1970] gave mathematical substance to this idea. Scientific reasoning about chance, he argued, should be understood as arising from symmetries in degrees of belief. De Finetti’s gambit is popular in some quarters of statistics and philosophy – see, for example, [Bernardo and Smith 2009], [Spiegelhalter 2024], [Skyrms 1984: ch. 3], [Diaconis and Skyrms 2017: ch. 7], [Jeffrey 2004]. It is safe to say, however, that it has not been widely accepted. Science textbooks generally ignore it. So does the excellent Stanford Encyclopedia entry on “Interpretations of Probability” [Hájek 2023].
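For readers new to de Finetti, the symmetry at issue is exchangeability, and the mathematical substance is his representation theorem, stated here in its standard binary form (a background fact, not a formulation specific to any of the cited texts): if your degrees of belief over a sequence of 0/1 outcomes are exchangeable, i.e. invariant under reordering, then

    P(X_1 = x_1, \ldots, X_n = x_n) = \int_0^1 \theta^{k} (1-\theta)^{n-k} \, d\mu(\theta), \qquad k = \sum_{i=1}^{n} x_i,

so you reason exactly as if there were an unknown chance \theta with prior \mu. This is the sense in which talk of objective chance can be recovered from symmetries in degrees of belief.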
-
Traditional arguments for or against continuity rely upon the presupposition that scientific theories can serve as markers of descriptive truth. I argue that this notion of the term is misguided if we are concerned with the question of how our scientific schemes ought to develop. Instead, a reconstruction of the term involves identifying those concepts which guide the development from one scheme to its successor and according those concepts the status of being continuous. I explicitly construct an example of this kind of continuity using two formulations of Quantum Field Theory (QFT), and identify what persists from the standard formulation, which begins with an action, to its successor, which makes use of spinor helicity variables. Three concepts persist, each responsible for supplying explicit constraints on our expressions that serve to match them onto empirical predictions: Lorentz invariance, locality, and unitarity. Further extensions of this kind of analysis to models beyond the physical sciences are proposed.
-
The extravagances of quantum mechanics (QM) never fail to enrich the daily debate around natural philosophy. Entanglement, non-locality, collapse, many worlds, many minds, and subjectivism have challenged generations of thinkers. The approach taken here can perhaps be placed in the stream of quantum logic, in which the “strangeness” of quantum mechanics is “measured” through the violation of Bell’s inequalities; from there, it attempts an interpretative path that preserves realism yet ends up overturning it, restating the fundamental mechanisms of QM as a logical necessity for a strong realism.
-
Quantum mechanics is a theory that is as effective as it is counterintuitive. While quantum practices operate impeccably, they compel us to embrace enigmatic phenomena like the collapse of the state vector and non-locality, thereby pushing us towards untenable “hypotheses non fingo” stances. However, a century after its inception, we are presented with a promising interpretive key, intimated by Wheeler as early as 1974. The interpretative paradoxes of this theory might be resolved if we discern the relationship between logical undecidability and quantum undecidability. It will be demonstrated how both are intricately linked to an observer/observed relational issue, and how, following this path, the idiosyncratic behaviours of quantum physics can be reconciled with the normative.
-
In epidemiology, an effect of a dichotomous exposure on a dichotomous outcome is a comparison of risks between the exposed and the unexposed. Causally interpreted, this comparison is assumed to equal a comparison of counterfactual risks, i.e., the risks that would be observed if, hypothetically, each subject could be observed under both exposure states (Hernán and Robins, 2020). These comparisons are summarized by effect measures like the risk difference or the risk ratio. The risk difference describes the additive influence of an exposure on an outcome and is often called an absolute effect measure. Trials occasionally report the inverse of a risk difference, which can also be classified as an absolute measure, since inverting it again returns the risk difference. Measures like the risk ratio, which describe a multiplier of risk, are called relative, or ratio, measures.
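A minimal numerical sketch of the measures just described (the risks are made up for illustration, not taken from any study; the inverse of a risk difference is what clinical reports usually call the number needed to treat or harm):

    # Hypothetical risks of the outcome among exposed and unexposed subjects.
    risk_exposed = 0.20
    risk_unexposed = 0.15

    risk_difference = risk_exposed - risk_unexposed  # additive (absolute) measure: 0.05
    risk_ratio = risk_exposed / risk_unexposed       # multiplicative (relative) measure: ~1.33
    inverse_rd = 1 / risk_difference                 # the inverse reported by some trials: ~20

    # Inverting the inverse recovers the risk difference, which is why the
    # inverse of a risk difference still counts as an absolute measure.
    assert abs(1 / inverse_rd - risk_difference) < 1e-12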
-
I argue that forays into history of science in Kuhn’s The Structure of Scientific Revolutions (1962/1996) are by and large instances of “Great Man” history of science. “Great Man” history is the idea that history is the biography of great men. The “Great Man” of science model not only excludes women and people of color from science but also suggests that only special, exceptional people can succeed in science. If this is correct, then Kuhn (1962/1996) fails to usher in a “historiographic revolution in the study of science” or a “new historiography” (Kuhn 1962/1996, 3), as the book purports to do. Instead, it merely perpetuates the defunct historiography of the “Great Man” of science.
-
Suppose for simplicity that everyone is a good Bayesian and has the same priors for a hypothesis H, and also the same epistemic interests with respect to H. I now observe some evidence E relevant to H. My credence now diverges from everyone else’s, because I have new evidence. …
-
Suppose that your priors for some hypothesis H are 3/4 while my priors for it are 1/2. I now find some piece of evidence E for H which raises my credence in H to 3/4 and would raise yours above 3/4. If my concern is for your epistemic good, should I reveal this evidence E? …
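In odds form, the excerpt’s numbers pin down the strength of E: moving a prior of 1/2 to a posterior of 3/4 corresponds to a likelihood ratio of 3, and the same evidence moves a 3/4 prior to 9/10. A minimal sketch (the likelihood ratio is inferred from the stated numbers, not given in the original):

    def update(prior: float, likelihood_ratio: float) -> float:
        """Bayesian update in odds form: posterior odds = prior odds * LR."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    lr = 3.0                 # implied by the 1/2 -> 3/4 update on E
    print(update(0.50, lr))  # 0.75: my posterior
    print(update(0.75, lr))  # 0.90: your posterior, above 3/4 as stated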
-
Next month, I’m speaking at Natal-Con in Austin. The line-up is a who’s who of thinkers advocating more births: the Collinses, Lyman Stone, Catherine Pakaluk, Jonathan Anomaly, Razib Khan, Crémieux, Robin Hanson, and many more. …
-
One of the solutions of the measurement problem is given by spontaneous localization theories, in which a non-linear and stochastic dynamics makes superpositions spontaneously and randomly ‘decay’ into well-localized states. In this paper I discuss the original spontaneous localization theory as well as its subsequent refinements. I also present their possible ontologies and their relativistic extensions, analyzing whether spontaneous localization theories are more compatible with relativity than their alternatives. A notable feature of these approaches is that they make predictions which differ from those of axiomatic quantum theory, and thus they can be empirically tested. I conclude the paper by considering the problem of the tails and the question of whether GRW can provide a better ground for a statistical-mechanical explanation of the phenomena.
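For concreteness, one standard way of writing the original GRW ‘hitting’ dynamics (the textbook form, not this paper’s own notation): between hits the wave function evolves by the Schrödinger equation, and at Poisson-distributed random times with rate \lambda per particle it collapses as

    \psi \;\to\; \frac{L_x \psi}{\lVert L_x \psi \rVert}, \qquad L_x = (\pi r_C^2)^{-3/4} \exp\!\left( -\frac{(\hat{q} - x)^2}{2 r_C^2} \right),

with the center x chosen with probability density \lVert L_x \psi \rVert^2 and the new constants conventionally set to \lambda \approx 10^{-16}\,\mathrm{s}^{-1} and r_C \approx 10^{-7}\,\mathrm{m}. These two parameters are what make the theory’s predictions deviate, testably in principle, from those of standard quantum theory.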
-
Jeremy Kuhn, Carlo Geraci, Philippe Schlenker, Brent Strickland. Boundaries in space and time: Iconic biases across modalities. Cognition, 2021, 210, 104596. doi:10.1016/j.cognition.2021.104596.
-
Bet On It reader Dan Barrett wrote these notes for his Book Nook book club on my Selfish Reasons to Have More Kids: Why Being a Great Parent Is Less Work and More Fun Than You Think. Dan’s idea:
I’m organizing reading groups packaged as the Book Nook to help colleagues (1) guide their own learning journeys, (2) connect with people they’d otherwise not meet, & (3) deepen their understanding of the Principles of Human Progress. …
-
The inductive risk argument challenges the value-free ideal of science by asserting that scientists should manage the inductive risks involved in scientific inference through social values, which consists in weighing the social implications of errors when setting evidential thresholds. Most of the previous analyses of the argument fall short of engaging directly with its core assumptions, and thereby offer limited criticisms. This paper critically examines the two key premises of the inductive risk argument: the thesis of epistemic insufficiency, which asserts that the internal standards of science do not suffice to determine evidential thresholds in a non-arbitrary fashion, and the thesis of legitimate value-encroachment, which asserts that non-scientific value judgments can justifiably influence these thresholds. A critical examination of the first premise shows that the inductive risk argument does not pose a unique epistemic challenge beyond what is already implied by fallibilism about scientific knowledge, and fails because the mere assumption of fallibilism does not imply the untenability of value-freedom. This is demonstrated by showing that the way in which evidential thresholds are set in science is not arbitrary in any sense that would lend support to the inductive risk argument. A critical examination of the second premise shows that incorporating social values into scientific inference as an inductive risk-management strategy faces a meta-criterion problem, and consequently leads to several serious issues such as wishful thinking, category mistakes in decision making, or Mannheim-style paradoxes of justification. Consequently, value-laden strategies for inductive risk management in scientific inference would likely weaken the justification of scientific conclusions in most cases.
-
Scientific principles can undergo various developments. While philosophers of science have acknowledged that such changes occur, there is no systematic account of the development of scientific principles. Here we propose a template for analyzing the development of scientific principles called the ‘life cycle’ of principles. It includes a series of processes that principles can go through: prehistory, elevation, formalization, generalization, and challenge. The life cycle, we argue, is a useful heuristic for the analysis of the development of scientific principles. We illustrate this by discussing examples from foundational physics including Lorentz invariance, Mach’s principle, the naturalness principle, and the perfect cosmological principle. We also explore two applications of the template. First, we propose that the template can be employed to diagnose the quality of scientific principles. Second, we discuss the ramifications of the life cycle’s processes for the empirical testability of principles.
-
FAQ on Microsoft’s topological qubit thing
Q1. Did you see Microsoft’s announcement? A. Yes, thanks, you can stop emailing to ask! Microsoft’s Chetan Nayak was even kind enough to give me a personal briefing a few weeks ago. …
-
In this paper, we provide a critical overview of Feyerabend’s unpublished manuscript “On the Responsibility of Scientists.” Specifically, we locate the paper within Feyerabend’s corpus and show how it relates to his published remarks on topics such as expertise, democracy and science, opportunism, science funding, and the value of scientific knowledge. We also show how Feyerabend’s views anticipate the contemporary philosophical literature on values in science and point it in novel directions.
-
This work shows that the ontic-epistemic dichotomy is insufficient to capture the different levels of ignorance and their implications for probability theories. It proposes an essentially epistemic interpretation of quantum mechanics, built on an operational basis firmly anchored to experimental data and scientific methods. This approach enables a rigorous treatment of numerical values obtained from experiments without resorting to unnecessary ontological or metaphysical assumptions.
-
The standard formalism of quantum theory is derived by analyzing the behavior of single-variable physical systems. These systems, which have a minimal information capacity of only one piece of information, exhibit indeterministic behavior under independent measurements but can be described probabilistically for dependent measurements. By enforcing the principle of probability conservation in the transformations of outcome probabilities across various measurement scenarios, we derive the core components of standard quantum theory, including the Born rule, the Hilbert space structure, and the Schrödinger equation. Furthermore, we demonstrate that the requirements for conducting quantum experiments – specifically, preparing physical systems in coherent states – effectively reduce the number of independent variables to one, thereby transforming these systems into single-variable ones in practice. This completes our first-principles, information-theoretic derivation of quantum theory as the physics of single-variable physical systems.
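For reference, the textbook forms of the ‘core components’ named above (standard statements, not this paper’s derivation of them):

    P(a \mid \psi) = |\langle a \mid \psi \rangle|^2 \qquad \text{(Born rule)}

    i\hbar \, \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H} \, |\psi(t)\rangle \qquad \text{(Schrödinger equation)}

with states represented as unit vectors |\psi\rangle in a Hilbert space.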
-
Regular readers may know that I’ve been interested in epistocracy for quite some time now. Epistocracy is a political regime in which political power is allocated according to criteria of competence and knowledge. …
-
Last week I reblogged a post from 2023 where I began a discussion of a topic in a paper by Gardiner and Zaharatos (2022) (G & Z). G & Z fruitfully trace out connections between the severity requirement and the notion of sensitivity in epistemology. …
-
It is shown that one common formulation of Stalnaker’s semantics for conditionals is incomplete: it has no sound and (strongly) complete proof system. At first, this seems to conflict with well-known completeness results for this semantics (e.g., Stalnaker and Thomason 1967, Stalnaker 1970 and Lewis 1973, ch. 6). As it turns out, it does not: these completeness results rely on another closely-related formulation of the semantics that is provably complete. Specifically, the difference comes down to how the Limit Assumption is stated. I close with some remarks about what this means for the logic of conditionals.
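For orientation, one common ordering-based statement of the Limit Assumption (a standard formulation; the abstract’s point is precisely that subtly different ways of stating it behave differently): for every world w and every antecedent A whose set of worlds [A] is nonempty,

    \{\, v \in [A] : v \preceq_w u \ \text{for all } u \in [A] \,\} \neq \emptyset,

i.e. among the A-worlds there are \preceq_w-minimal (“closest”) ones, so a conditional can be evaluated at the closest A-worlds rather than along an infinite descent of ever-closer ones.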
-
In recent decades, a gap between two kinds of physical reasoning has opened up. Applied and phenomenological physics show all basic characteristics of canonical 20th century science. But fundamental physics, represented by high energy physics model building, quantum gravity and cosmology, faces substantially new challenges that influence the nature of the scientific process. Those shifts can be expected to become even more conspicuous in the period up to 2050. Exploring their full scope will arguably be an important task for fundamental physics in upcoming decades.
-
The nature of branching in the many-worlds interpretation (MWI) of quantum mechanics remains an open question, particularly regarding its locality and compatibility with special relativity. This paper challenges the conventional view that branching is either global or local, demonstrating instead that it is nonlocal for entangled systems. Through a new analysis of the EPR-Bohm experiment, I argue that global branching has several potential issues and can hardly be justified. At the same time, I argue that branching cannot be entirely local, as entangled particles exhibit simultaneous, spacelike-separated branching, manifesting an apparent action at a distance within individual worlds. However, while nonlocal branching suggests the emergence of a preferred Lorentz frame within each world, the multiverse as a whole retains full Lorentz invariance, ensuring no superluminal signaling. By refining the ontology of branching and resolving tensions between MWI and relativistic constraints, this analysis may help advance our understanding of quantum nonlocality and also strengthen MWI’s standing as a viable interpretation of quantum mechanics.
-
Applied category theorists are flocking to AI, because that’s where the money is. I avoid working on it, both because I have an instinctive dislike of ‘hot topics’, and because at present AI is mainly being used to make rich and powerful people richer and more powerful. …
-
A central question in the philosophy and epistemology of science concerns the characterization of scientific progress. Many philosophers of science and epistemologists have developed accounts of scientific progress, laying down desiderata for, and providing success criteria of, any such account. Extant accounts of scientific progress are surveyed and critically assessed, and it is shown that all face the same problem. The constitution-promotion distinction – a commitment shared by all the accounts – is identified as the root of the problem. In their place, a novel way of understanding scientific progress is developed – inspired by pragmatic philosophy of science and zetetic epistemology – which rejects the problematic constitution-promotion distinction and, importantly, provides a vision of scientific progress that does not depend on the aim of science.
-
We argue that the special and general theories of relativity implicitly assume that spacetime events correspond to quantum measurement outcomes. This leads to a change in how one should view the equivalence of spacetime and gravity. We describe a Bell test using time-like measurements that indicates a non-classical causal structure which does not violate no-signaling. From this perspective, violations of the Bell inequalities are already evidence for the non-classical structure of flat spacetime as seen by an agent embedded in it. We argue that spacetime geometry can be learned by an embedded agent with internal actuators and sensors making internal measurements.
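The Bell-inequality violation invoked here is standardly quantified by the CHSH inequality (background fact, not specific to this paper): for measurement settings a, a' and b, b' with \pm 1-valued outcomes, any locally causal model satisfies

    S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2,

whereas quantum mechanics allows |S| up to 2\sqrt{2} (Tsirelson’s bound). It is this excess that the authors read as evidence of a non-classical causal structure compatible with no-signaling.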
-
Performativity refers to the phenomenon that scientific conceptualisations can sometimes change their target systems or referents. A widely held view in the literature is that scientists ought not to deliberately deploy performative models or theories with the aim of eliciting desirable changes in their target systems. This paper has three aims. First, I cast and defend this received view as a worry about autonomy-infringing paternalism and, to that end, develop a taxonomy of the harms it can impose. Second, I consider various approaches to this worry within the extant literature and argue that they offer only unsatisfactory responses. Third, I advance two positive claims: manipulation of target systems (a) is not inherently paternalist and can be unproblematic, and (b) is sometimes paternalist, but where such paternalism is inescapable, it must be justifiable. I generalise an example of modelling international climate-change coordination to develop this point.
-
We evaluate the roles general relativistic assumptions play in the simulations used in recent black hole observations, including those of LIGO-Virgo and the Event Horizon Telescope. In both experiments, simulations play an ampliative role, enabling the extraction of more information from the data than would otherwise be possible. This comes at the cost of theory-ladenness. We discuss the issue of inferential circularity, which arises in some applications; classify some of the epistemic strategies used to reduce the extent of theory-ladenness; and discuss ways in which these strategies are model-independent.