-
100366.337577
The belief that beauty leads to truth is prevalent among contemporary physicists. Far from being a private faith, it operates as a methodological guiding principle, especially when physicists have to develop theories without new empirical data.
-
100419.337925
Scenarios and pathways, as defined and used in the “SSP-RCP scenario framework”, have been central to the last decade’s climate change research and to the latest report of the Intergovernmental Panel on Climate Change (IPCC). In this framework, Shared Socioeconomic Pathways (SSPs) consist of a limited set of alternative socioeconomic futures, represented both in short qualitative narratives and in quantitative projections of key drivers. One important use of the computationally derived SSP scenarios is to conduct mitigation analysis and present a “manageable” set of options to decision-makers. However, all SSPs, and derivatively all SSP scenarios in this framework, assume a globally growing economy through 2100. In practice, this amounts to a value-laden restriction of the space of solutions presented to decision-makers, falling short of the IPCC’s general mandate of being “policy-relevant and yet policy-neutral, never policy-prescriptive”. Yet the Global Economic Growth Assumption (GEGA) could be challenged, and in practice is challenged, by post-growth scholars.
-
100443.33795
Robustness of AI alignment is one of the safety issues of large language models. Can we predict how many mistakes a model will make when responding to a restricted request? We show that when access to the model is limited to in-context learning, the number of mistakes can be proved inapproximable, which can make the model’s alignment unpredictable. Counterintuitively, this is not entirely bad news for AI safety. Attackers might not be able to easily misuse in-context learning to break the model’s alignment in a predictable manner, because the mistake bounds of the safe responses used for alignment can be proved inapproximable. This inapproximability can hide the safe responses from attackers and make the model’s alignment unpredictable to them. If the safe responses could be kept from attackers, responsible users would still benefit from testing and repairing the model’s alignment despite its possible unpredictability. We also discuss challenges involved in ensuring democratic AI alignment with limited access to safe responses, which helps make the model’s alignment unpredictable for attackers.
-
185523.337985
Very short summary: This essay provides an account in favor of a progressive consumption tax, in light of the efficiency and fairness issues that affect the more common progressive income tax. I argue that the progressive consumption tax not only avoids the standard incentive problem but also responds to Hayek’s critique of the unfairness of progressive taxation. …
-
249443.337999
Casajus (J Econ Theory 178, 2018, 105–123) provides a characterization of the class of positively weighted Shapley values for finite games from an infinite universe of players via three properties: efficiency, the null player out property, and superweak differential marginality. The latter requires two players’ payoffs to change in the same direction whenever only their joint productivity changes, that is, their individual productivities stay the same. Strengthening this property into (weak) differential marginality yields a characterization of the Shapley value. We suggest a relaxation of superweak differential marginality into two subproperties: (i) hyperweak differential marginality and (ii) superweak differential marginality for infinite subdomains. The former (i) only rules out changes in the opposite direction. The latter (ii) requires changes in the same direction for players within certain infinite subuniverses. Together with efficiency and the null player out property, these properties characterize the class of weighted Shapley values.
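For background only (standard definitions, not reproduced from Casajus’ paper; the symbols Sh, w, and u_T are introduced here for illustration): the Shapley value averages marginal contributions, and a positively weighted Shapley value with weights w_i > 0 is fixed on unanimity games and extended linearly,

\[
\mathrm{Sh}_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),
\qquad
\mathrm{Sh}^{w}_i(u_T) =
\begin{cases}
\dfrac{w_i}{\sum_{j \in T} w_j} & \text{if } i \in T,\\[1ex]
0 & \text{otherwise,}
\end{cases}
\]

where u_T is the unanimity game on coalition T (u_T(S) = 1 iff T ⊆ S); with equal weights, the weighted value reduces to the Shapley value itself.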
-
261413.338017
Baumann, Peter. 2025. “Transcendental Arguments in Reid? A Reply to McCraw.” Social Epistemology Review and Reply Collective 14 (7): 1–6. https://wp.me/p1Bfg0-9Ze. Benjamin W. McCraw’s article “A Reidian Transcendental Argument Against Skepticism” (2025) constitutes an original and thought-provoking contribution both to Reid scholarship and to the discussion of epistemic skepticism. In the following I will make a few remarks about it, focusing on the discussion of skepticism. I start with a brief historical remark on Reid and Kant (§ 1) before I explain the anti-skeptical argument in some detail (§ 2). A discussion of the premises of the argument follows (§ 3). I add some remarks about the social aspect of McCraw’s anti-skeptical stance (§ 4). I finish with another set of historical remarks (§ 5), this time about Reid and Wittgenstein, and a brief conclusion (§ 6).
-
265523.33803
I’m a non-conformist, but not a reflexive contrarian. My chief goal is to enjoy every day of my life, and my non-conformism is only a means to that end. But what a means it is! By the power of non-conformism, I weasel out of hours of daily drudgery. …
-
270996.338046
According to Beebee (2018), methodological problems and intractable disagreement in philosophy suggest that we should not believe any philosophical claims, but (in brief) accept them as working hypotheses. Beebee takes inspiration from van Fraassen (1980), whose scientific instrumentalism recommends such “acceptance” for claims about unobservables in microphysics. In this paper, I argue that Beebee-style acceptance faces problems which have no analogue for van Fraassen-style acceptance. In short, philosophical beliefs for the equilibrist are not optional in a way that beliefs about microphysics are. Or rather, that is so unless a fairly radical deflationism about truth and meaning is joined with equilibrism. For what it’s worth, I am not entirely opposed to this under certain qualifications. Regardless, radical deflationism would be a significant liability for equilibrism, and so it is unlikely to be welcomed by Beebee.
-
358001.338057
Historically, the hypothesis that our world is a computer simulation has struck many as just another improbable-but-possible “skeptical hypothesis” about the nature of reality. Recently, however, the simulation hypothesis has received significant attention from philosophers, physicists, and the popular press. This is due to the discovery of an epistemic dependency: If we believe that our civilization will one day run many simulations concerning its ancestry, then we should believe that we are probably in an ancestor simulation right now. This essay examines a troubling but underexplored feature of the ancestor-simulation hypothesis: the termination risk posed by both ancestor-simulation technology and experimental probes into whether our world is an ancestor simulation. This essay evaluates the termination risk by using extrapolations from current computing practices and simulation technology. The conclusions, while provisional, have great implications for debates concerning the fundamental nature of reality and the safety of contemporary physics.
-
532285.338069
Scientists decide to perform an experiment based on the expectation that their efforts will bear fruit. While assessing such expectations belongs to the everyday work of practicing scientists, we have a limited understanding of the epistemological principles underlying such assessments. Here I argue that we should delineate a “context of pursuit” for experiments. The rational pursuit of experiments, like the pursuit of theories, is governed by distinct epistemic and pragmatic considerations that concern epistemic gain, likelihood of success, and feasibility. A key question that arises is: what exactly is being evaluated when we assess experimental pursuits? I argue that, beyond the research questions an experiment aims to address, we must also assess the concrete experimental facilities and activities involved, because (1) there are often multiple ways to address a research question, (2) pursuitworthy experiments typically address a combination of research questions, and (3) experimental pursuitworthiness can be boosted by past experimental successes. My claims are supported by a look into ongoing debates about future particle colliders.
-
532308.338081
The question of which scientific ideas are worth pursuing is a fundamental challenge in science, particularly in fields where the stakes are high and resources are limited. When the research is also time-sensitive, the challenge becomes even greater. Philosophers of science have analyzed the pursuitworthiness of science from multiple perspectives, on topics ranging from whether there is a logic of pursuit (Feyerabend 1975; Shaw 2022), to whether scientific standards ought to be relaxed in times of “fast science” (Friedman and Šešelja 2023; Stegenga 2024), to the role of criticism in evaluating scientific pursuits (DiMarco and Khalifa 2022).
-
532333.338093
This article revisits Taurek’s famous question: Should the greater number be saved in situations of resource scarcity? At the heart of this debate lies a central issue in normative ethics—whether numerical superiority can constitute a moral pro tanto reason. Engaging with this question helps to illuminate core principles of normative theory. Welfarism_min presents a pro-number position. The article first outlines Taurek’s original argument. It then examines non-welfarist responses and explains why they remain unsatisfactory. Finally, it identifies the main shortcomings of the hybrid welfarism_min approach and suggests a possible alternative for more adequately addressing the Taurek problem.
-
605701.338105
Tarot is widely disdained as a way of finding things out. Critics claim it is bunk or—worse—a wretched scam. This disdain misunderstands both tarot and the activity of finding things out. I argue that tarot is an excellent tool for inquiry. It initiates and structures percipient conversation and contemplation about important, challenging, and deep topics. It galvanises creative attention, especially towards inward-looking, introspective inquiry and open-minded, collaborative inquiry with others. Tarot can cultivate virtues like epistemic playfulness and cognitive dexterity.
-
618443.338121
The epistemic projection approach (EPA) is an intermediate approach to value management in science. It recognizes that there are sometimes good reasons to make research responsive to contextual values, but it achieves this responsiveness via the careful formulation of a research problem in the problem-selection stage of investigation. EPA is thus an approach that could be acceptable to some parties on both sides of the debate over the value-free ideal. Independent of this, EPA provides practitioners with concrete guidance on how to make research responsive to contextual values. This is illustrated with an example involving air pollution.
-
784932.338133
In this paper, I challenge the Consequence Argument for Incompatibilism by arguing that the inference principle it relies upon is not well motivated. The sorts of non-question-begging instances that might be offered in support of it fall short.
-
784954.338149
It is commonplace to note that libertarians about free will face a compatibility problem of their own. Indeterminism appears to be at odds with freedom rather than a condition for it, since it injects only chance or luck into the etiology of action. This problem, the luck problem, is widely regarded as unique to libertarians. However, this is false. Compatibilists face the same luck problem that animates libertarians. In this paper, I set out what the luck problem is and why compatibilists face it too. I then show that the most natural resources one might think a compatibilist should use to solve the problem are insufficient. I close with a proposal for compatibilists.
-
791487.338165
Achilles and the tortoise compete in a race where the beginning (the start) is at point O and the end (the finish) is at point P. At all times the tortoise can run at a speed that is at most a fraction k of Achilles’ speed (with k a positive real number less than 1, 0 < k < 1), and both start the race at t = 0 at O. If the trajectory joining O with P is a straight line, Achilles will obviously win every time. It is easy to prove that there is a trajectory joining O and P along which the tortoise has a strategy to win every time, reaching the finish before Achilles.
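A minimal check of the straight-line claim, with Achilles’ constant speed written as v and the length of the straight segment as |OP| purely for illustration (neither symbol appears in the original statement): since the tortoise’s speed never exceeds k·v with 0 < k < 1,

\[
t_{\text{Achilles}} = \frac{|OP|}{v} \;<\; \frac{|OP|}{k\,v} \;\le\; t_{\text{tortoise}},
\]

so on the straight trajectory Achilles always finishes first; the interest of the result lies entirely in the non-straight trajectory.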
-
866518.33818
I argue that rationality does not always require proportioning one’s beliefs to one’s evidence. I consider cases in which an agent’s evidence deteriorates over time, revealing less about the world or the agent’s location than their earlier evidence. I claim that the agent should retain beliefs that were supported by the earlier evidence, even if they are no longer supported by the later evidence. Failing to do so would violate an attractive principle of epistemic conservatism; it would foreseeably decrease the accuracy of the agent’s beliefs; it would make the agent susceptible to simple Dutch Books; it would allow them to manipulate their evidence to increase their confidence in desirable propositions over which they have no control. I defend the background assumption that dynamic considerations are relevant to epistemic rationality.
-
869133.338194
Sebens and Carroll (2018) propose that self-locating uncertainty, constrained by their Epistemic Separability Principle (ESP), derives Born rule probabilities in Everettian quantum mechanics. Their global branching model, however, leads to the loss of local amplitudes, undermining this derivation. This paper argues that global branching’s premature splitting of observers, such as Bob in an EPR-Bohm setup, yields local pure states devoid of the amplitude coefficients essential for Born rule probabilities. Despite their innovative framework, further issues with global branching—conflicts with decoherence, relativistic violations via physical state changes, and constraints on superposition measurements—render it empirically inadequate. Defenses, such as invoking global amplitudes, fail to resolve these flaws. Additionally, observer-centric proofs of the Born rule neglect objective statistics, weakening their empirical grounding. This analysis underscores the need to reconsider branching mechanisms to secure a robust foundation for Everettian probabilities.
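As a standard illustration of the local amplitudes at stake (textbook material, not taken from the paper itself): in an EPR-Bohm setup the pair is prepared in the singlet state, and neither Bob’s post-branching local states (e.g. |↑⟩_B) nor his reduced state carries the 1/√2 coefficients that a Born-rule weighting would need,

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_A |{\downarrow}\rangle_B - |{\downarrow}\rangle_A |{\uparrow}\rangle_B\bigr),
\qquad
\rho_B = \mathrm{Tr}_A\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\,\mathbb{I}.
\]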
-
884697.338206
Imagine living in a society where most people (at least in the privileged classes) regularly participate in perpetuating a moral atrocity—slavery, say, or factory farming; any practice you’re deeply appalled by will do. …
-
1074136.338216
The present review discusses the literature on how and when social category information and individuating information influence people’s implicit judgments of other individuals who belong to existing (i.e., known) social groups. After providing some foundational information, we discuss several key principles that emerge from this literature: (a) individuating information moderates stereotype-based biases in implicit (i.e., indirectly measured) person perception, (b) individuating information usually exerts small to no effects on attitude-based biases in implicit person perception, (c) individuating information influences explicit (i.e., directly measured) person perception more than implicit person perception, (d) social category information affects implicit person perception more than it affects explicit person perception, and (e) the ability of other variables to moderate the effects of individuating information on stereotype- and attitude-based biases in implicit person perception varies. Within the discussion of each of these key points, relevant research questions that remain unaddressed in the literature are presented. Finally, we discuss both theoretical and practical implications of the principles discussed in this review.
-
1087813.338245
Synthetic media generators, such as DALL-E, and synthetic media artifacts, such as deepfakes, undermine our fundamental epistemic standards and practices. Yet, the nature of their epistemic threat remains elusive. After all, fictional or distorted representations of reality are as old as photography. We argue that the novel epistemic threat of synthetic media is that, for the first time, synthetic media tools afford ordinary computer users the practicable possibility to cheaply and effortlessly create and widely share fictional worlds indistinguishable from the real world or credible representations of it. We further argue that a synthetic media artifact is epistemically malignant in a given media context for a person acquainted with the context when the person is misled to confuse the version of the world depicted in it with the real world in an epistemically or morally significant way.
-
1214039.338268
Political meritocracy is the idea that political institutions should aim to empower those people who are particularly well-suited to rule. This article surveys recent literature in democratic theory that argues on behalf of institutional arrangements that aim to realize the ideal of political meritocracy. We detail two prominent families of meritocratic proposals: nondemocratic meritocracy and weighted voting. We then describe and briefly evaluate five potentially important criticisms of political meritocracy related to the coherence of merit as an ideal, the demographic objection, rent-seeking, political inequality, and social peace. We also consider the key ways in which existing electoral democracies create spaces for institutionally meritocratic forms. Finally, we highlight the importance of exploring institutional innovations that allow democracies to effectively incorporate expertise without, at the same time, becoming vulnerable to the criticisms of political meritocracy that we discuss.
-
1367284.338286
In this paper, we address a key question that has been central to discussions on rationality: is the concept of rationality normative or merely descriptive? We present the findings of a corpus-linguistic study revealing that people commonly perceive the concept of rationality as normative.
-
1367306.338298
As just mentioned, the Knowledge Account is a very influential view of ignorance. Recently, however, it has come under attack. Pritchard (2021a, 2021b) has offered several counterexamples that suggest ignorance has a normative dimension, which the Knowledge Account cannot easily capture (see also Meylan, 2024). Let us point out that we present these counterexamples because one of our objectives in this article is to consolidate the (possibly refutable) intuitions underlying them, using empirical data. So, here are Pritchard’s three counterexamples: First, in Pritchard’s view, it is quite unfitting to attribute ignorance of a fact to individuals when this fact cannot possibly be known. For instance, it does not sound fully appropriate to claim that “prehistorians are ignorant of whether Homo sapiens sapiens were tying their hair up.” We would rather say that they simply do not know this, or that they simply have no belief about this.
-
1367334.33832
Conceptual engineering is the practice of revising concepts to improve how people talk and think. Its ability to improve talk and thought ultimately hinges on the successful dissemination of desired conceptual changes. Unfortunately, the field has been slow to develop methods to directly test what barriers stand in the way of propagation and what methods will most effectively propagate desired conceptual change. In order to test such questions, this paper introduces the masked time-lagged method, which tests people’s concepts at a later time than the intervention, without participants’ knowledge, allowing us to measure conceptual revision in action. Using a masked time-lagged design within a content internalist framework, we attempted to revise planet and dinosaur in online participants to match experts’ concepts. We successfully revised planet but not dinosaur, demonstrating some of the difficulties conceptual engineers face. Nonetheless, this paper provides conceptual engineers, regardless of framework, with the tools to tackle questions related to implementation empirically and head-on.
-
1445767.338338
The concept of infinity has long occupied a central place at the intersection of mathematics and philosophy. This paper explores the multifaceted concept of infinity, beginning with its mathematical foundations, distinguishing between potential and actual infinity and outlining the revolutionary insights of Cantorian set theory. The paper then explores paradoxes such as Hilbert’s Hotel, the St. Petersburg Paradox, and Thomson’s Lamp, each of which reveals tensions between mathematical formalism and basic human intuition. Adopting a philosophical approach, the paper analyzes how five major frameworks—Platonism, formalism, constructivism, structuralism, and intuitionism—each grapple with the metaphysical and epistemological implications of infinity. While each framework provides unique insights, none fully resolves the many paradoxes inherent in infinite mathematical objects. Ultimately, this paper argues that infinity serves not as a problem to be conclusively solved, but as a generative lens through which to ask deeper questions about the nature of mathematics, knowledge, and reality itself.
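For reference, the St. Petersburg Paradox mentioned above turns on a divergent expectation: a fair coin is tossed until it first lands heads, the payout is 2^n if that happens on toss n, and so

\[
\mathbb{E}[\text{payout}] = \sum_{n=1}^{\infty} \frac{1}{2^{n}} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,
\]

an infinite expected value for a game few would pay more than a modest stake to play.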
-
1452137.338352
I was delighted that Good Thoughts passed 5,000 (mostly free) subscribers a few months ago: that’s at least 4,800 more people interested in moral philosophy than I was expecting! (And it continues to grow at ~100 new subscribers each month, with no sign of a cap as yet.) …
-
1481426.338368
Have the points in Stephen Senn’s guest post fully come across? Responding to comments from diverse directions has given Senn a lot of work, for which I’m very grateful. But I say we should not leave off the topic just yet. …
-
1734162.338386
Scientific understanding typically involves multiple specialists performing interdependent tasks. According to several social-epistemological accounts, this suggests that scientific communities are collective epistemic subjects. We argue instead that the data does not warrant the postulation of a collective subject. Our position, rather, is fictionalist: we argue that the use of sentences attributing understanding to scientific communities amounts to loose talk which is best construed as indicating how social environments associated with a scientific community promote individual scientists' understanding.