-
I examine Howson’s alluring suggestion that Bayesianism, by supplying a logic of inductive inference—conditionalisation—solves the problem of induction. I draw on his intellectual heritage, especially Hume, Peirce, and Ramsey, to reconstruct the interpretation of the problem of induction that his remarks intimate. Roughly, it is the problem of how to amend the system with which one meets the world in the light of new particulars. His claim that conditionalisation constitutes a solution to this problem, I argue, fails by his own lights, because it turns on the widely endorsed but nonetheless erroneous contention that a justification of conditionalisation qua rule of inference can be given independently of a justification of the priors.
-
Bell’s inequality is derived from three assumptions: measurement independence, outcome independence, and parameter independence. Among these, measurement independence, often taken for granted, holds that hidden variables are statistically uncorrelated with measurement settings. Under this assumption, the violation of Bell’s inequality implies that either outcome independence or parameter independence fails to hold, meaning that local hidden variables do not exist. In this paper, we refer to this interpretive stance as the nonfactorizable position. In contrast, superdeterminism represents the view that measurement independence does not hold. Despite its foundational role, this assumption has received relatively little philosophical scrutiny. This paper offers a philosophical reassessment of measurement independence through three major frameworks in the philosophy of science: de Regt’s contextual theory of scientific understanding, Kuhn’s criteria for theory choice, and Lakatos’s methodology of scientific research programmes. Using these lenses, we evaluate the two major responses to the violation of Bell’s inequality, the nonfactorizable position and superdeterminism, and argue that the nonfactorizable position currently fares better across all three criteria. Beyond this binary, we introduce a spectrum of intermediate positions that allow for partial violations of measurement independence, modeled via mutual information. These positions modify the “positive heuristic” of superdeterminism, a crucial component in Lakatos’s definition of research programmes, offering avenues for progressive research. This analysis reframes the debate surrounding Bell’s inequality and illustrates how methodological tools can effectively guide theory evaluation in physics.
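For orientation, here is a minimal formal sketch of the assumption at issue and of the mutual-information relaxation the abstract mentions; the notation (λ for hidden variables, a and b for setting choices) is standard but supplied by me, not taken from the paper. Measurement independence requires

\[ \rho(\lambda \mid a, b) = \rho(\lambda), \]

which is equivalent to the mutual information between hidden variables and settings vanishing,

\[ I(\Lambda ; A, B) = \sum_{\lambda, a, b} p(\lambda, a, b)\, \log \frac{p(\lambda, a, b)}{p(\lambda)\, p(a, b)} = 0 . \]

The intermediate positions described in the abstract can then be read as allowing partial violations, \( 0 < I(\Lambda ; A, B) \le \epsilon \) for some small bound \( \epsilon \), rather than requiring strict independence.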
-
For various reasons, it has become common wisdom in science that there exists a principled epistemic distinction between direct and indirect observation. In this paper, I present a twofold argument. First, I argue against such a principled epistemic distinction. Second, I highlight a pervasive incongruence between the methodological and the epistemological distinctions between direct and indirect observations. My arguments revolve around the idea that it is one thing to make a methodological distinction between observations and another to ascribe epistemic significance to that distinction. I begin by unfolding the historical and philosophical foundations of the distinction, identifying three tenets that have served to sustain it to the present day. I then provide a detailed analysis of two recent philosophical efforts to preserve the epistemic distinction in astrophysics and in specific areas of the field, ultimately suggesting that these approaches face significant challenges.
-
This paper addresses the accelerating crisis of ethical governance in an age of complex socio-technical change, particularly in the domain of Artificial Intelligence. It poses a foundational philosophical question: when, if ever, is AI assistance in ethical deliberation legitimate? An answer is developed through three theses: i) the Ethical No-Free-Lunch (ENFL) principle, which establishes the indispensability of human normative intervention and accountability; ii) the Discovery/Justification Separation inspired by Reichenbach’s work, which restricts AI use to the exploratory “context of discovery”; iii) the Algorithmic Mediated Control Framework (AMCF), which mandates that only scrutable, human-vetted deterministic algorithms generated with AI assistance, and not the AI itself, be entrusted with critical societal processes. From these theses, five legitimacy criteria for AI-assisted ethical deliberation are derived. Finally, the paper proposes the “AI-assisted Iterative Method for Ethical Deliberation” (AIMED), an actionable multi-stage workflow that fulfills these criteria. This method integrates digital literature analysis, structured human–AI dialogue, human-only verification, and continuous feedback. The paper explicitly addresses several potential objections. It shows how the AIMED framework aligns with, and provides a concrete implementation for, major international regulatory guidelines such as the EU AI Act and the NIST AI Risk Management Framework. By situating AIMED within traditions of proceduralism, the governance of inductive risk, and human–AI collaboration, the paper argues that this framework offers a philosophically justified, practically implementable model of AI-assisted ethical governance, one that can be seen as an actionable instance of Digital Humanism.
-
We investigate the epistemic role of coherence in scientific reasoning, focusing on its use as a heuristic for filtering evidence. Using a novel computational model based on Bayesian networks, we simulate agents who update their beliefs under varying levels of noise and bias. Some agents treat reductions in coherence as higher-order evidence and interpret such drops as signals that something has gone epistemically awry, even when the source of error is unclear. Our results show that this strategy can improve belief accuracy in noisy environments but tends to mislead when evidence is systematically biased. We explore the implications for the rationality of coherence-based reasoning in science.
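As a concrete, deliberately crude illustration of the kind of strategy being simulated, here is a toy sketch in Python. It is my own construction, not the authors’ Bayesian-network model, and the parameter names, noise levels, and the 0.8 coherence threshold are illustrative assumptions.

    # Toy sketch: agents update on batches of noisy reports about a binary hypothesis H;
    # an optional heuristic treats low within-batch coherence (disagreement) as
    # higher-order evidence and discounts the whole batch.
    import random

    def run_agent(truth, n_batches=50, batch_size=5, noise=0.3, bias=0.0,
                  use_coherence=False, threshold=0.8, seed=None):
        rng = random.Random(seed)
        credence = 0.5  # prior credence in H
        for _ in range(n_batches):
            batch = []
            for _ in range(batch_size):
                report = truth if rng.random() > noise else (not truth)
                if rng.random() < bias:          # systematic bias pushes reports toward "False"
                    report = False
                batch.append(report)
            agreement = max(sum(batch), batch_size - sum(batch)) / batch_size
            if use_coherence and agreement < threshold:
                continue                          # incoherent batch: treat as suspect and skip it
            for report in batch:                  # standard Bayesian update, noise level assumed known
                p_h = (1 - noise) if report else noise
                p_not_h = noise if report else (1 - noise)
                credence = credence * p_h / (credence * p_h + (1 - credence) * p_not_h)
        return credence

    for label, bias in [("noise only", 0.0), ("systematic bias", 0.4)]:
        plain = sum(run_agent(True, bias=bias, seed=i) for i in range(200)) / 200
        filtered = sum(run_agent(True, bias=bias, use_coherence=True, seed=i) for i in range(200)) / 200
        print(f"{label}: mean credence in true H -- plain {plain:.2f}, coherence-filtered {filtered:.2f}")

Whether the coherence filter helps or misleads in this toy depends on the chosen parameters; the paper’s systematic result is that coherence-based filtering tends to improve accuracy under unbiased noise but to mislead when the evidence is systematically biased.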
-
Very short summary: This essay examines the extension of marginalism to morality and politics. The marginalist reasoning principle holds that past decisions are irrelevant when assessing the rationality of current and future choices. …
-
The success of AlphaFold, an AI system that predicts protein structures, poses a challenge to the traditional understanding of scientific knowledge. It generates predictions that are not empirically tested, without revealing the principles behind its predictive success. The paper presents an epistemological trilemma, forcing us to reject one of three claims: (1) AlphaFold produces scientific knowledge; (2) predictions alone are not scientific knowledge unless they are derivable from established scientific principles; and (3) scientific knowledge cannot be strongly opaque. The paper defends (1) and (2) and draws on Alexander Bird's functionalist, anti-individualist account of scientific knowledge to accommodate AlphaFold's production of strongly opaque knowledge in science.
-
Mature scientific hypotheses are confirmed by large amounts of independent evidence. How could anyone be an anti-realist under these conditions? A classic response appeals to confirmational holism and underdetermination, but it is unclear whether traditional arguments succeed. I offer a new line of argument: If holism is interpreted as saying that the confirmation of every part of a hypothesis depends on the confirmation of the whole hypothesis, we must formulate conditions under which the confirmation received by the whole can be transferred to its parts. However, underdetermination suggests that the relevant conditions are typically not met. If this is true, the confirmation received by the whole remains bounded by the priors for the parts, and we lack compelling reasons to believe substantive hypotheses based on evidence beyond the degree to which the posits involved in them are antecedently believed. A rejoinder comes from selective realism: If some posit is preserved throughout theory change, it is confirmed beyond the degree to which the containing hypothesis is. However, the variant of holism considered here implies precisely that we cannot confirm such posits in isolation. As I will show, the realist is thus forced into a dilemma: Either she succumbs to the holistic challenge, or she must embrace meta-empirical facts, such as the posit’s recurrence, as confirmatory.
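One way to make the bound explicit (my gloss, not the paper’s own formalism): if the whole hypothesis \(H\) entails one of its parts \(h\), then for any evidence \(E\),

\[ P(H \mid E) \le P(h \mid E), \]

so if underdetermination keeps \(E\) from confirming \(h\) in isolation, leaving \(P(h \mid E)\) close to the prior \(P(h)\), the posterior for the whole hypothesis can never rise much above that prior.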
-
In this paper, I outline an epistemology of evidential reasoning in the history and philosophy of science (HPS). Drawing upon some prominent works in HPS as case studies, I formulate three novel epistemological desiderata for using historical case studies as evidence for philosophical claims about science, to wit: independent historical evidence, metahistorical criticism, and disciplinary alignment. These desiderata pick out some epistemic qualities and contribute to the achievement of the primary goal of evidential reasoning, which is to confer justification upon philosophical conclusions on the basis of historical evidence. In this way, my proposed epistemology tackles the “methodological” problem of vicious circularity and the “metaphysical” problem of disciplinary unsuitability that allegedly jeopardise HPS practice, thereby vindicating its positive epistemic status.
-
Very short summary: In this essay, I discuss an objection to my claim that Hayek’s argument against progressive taxation doesn’t apply to the progressive consumption tax. I concede that under a steady state where growth has stalled, the claim falls. …
-
In this paper, we propose a novel algorithm for epistemic planning based on dynamic epistemic logic (DEL). The novelty is that we limit the depth of reasoning of the planning agent to an upper bound b, meaning that the planning agent can only reason about higher-order knowledge to at most (modal) depth b. The algorithm makes use of a novel type of canonical b-bisimulation contraction guaranteeing unique minimal models with respect to b-bisimulation. We show our depth-bounded planning algorithm to be sound. Additionally, we show it to be complete with respect to planning tasks having a solution within bound b of reasoning depth (and hence the iterative bound-deepening variant is complete in the standard sense). For bound b of reasoning depth, the algorithm is shown to be (b+1)-EXPTIME-complete, and furthermore fixed-parameter tractable in the number of agents and atoms. We present both a tree search and a graph search variant of the algorithm, and we benchmark an implementation of the tree search version against a baseline epistemic planner.
-
An anonymous reader sent me this critique of my “Dynamic Case for Non-Compete,” featured in Pro-Market and Pro-Business: Essays on Laissez-Faire. Enjoy! You argue that non-competes can be beneficial since they make companies willing to share sensitive IP with employees. …
-
This paper explores whether people are more likely to recognize inconsistency in others’ judgments than in their own, and if so, why. It reports two pre-registered online experiments with samples representative of the UK population (N = 814 and N = 1,623). In Study 1, people are more likely to recognize inconsistency in others’ moral (and non-moral) judgments than in their own. Study 2 replicates this finding and tests three explanations: (i) motivated reasoning, (ii) selective cognitive effort, and (iii) limited insight into others’ reasoning. Ad (i), because people’s susceptibility to motivated reasoning is said to diminish when people must account for their judgments, the presence of motivated reasoning was examined by manipulating social accountability. No effect was found. Ad (ii), while people spent significantly more time (a proxy for cognitive effort) on reviewing others’ consistency than their own, this explained only a fraction of the greater rate at which inconsistencies in others’ reasoning were recognized. Ad (iii), using low confidence in consistency evaluations as a proxy for limited insight, the study did not find support for the limited insight hypothesis. The finding that people are better at recognizing inconsistency in others’ moral judgments aligns with the idea that moral consistency reasoning is a social device that operates best when we interact with others, but more research is needed to uncover the psychological mechanisms behind this effect.
-
Warning: I worry there may be something wrong in the reasoning below. Causal Decision Theory (CDT) and Evidential Decision Theory (EDT) tend to disagree when the payoff of an option statistically depends on your propensity to go for that option. …
-
Democratic theorists and social epistemologists often celebrate the epistemic benefits of diversity. One of the cornerstones is the ‘diversity trumps ability’ result by Hong and Page (2004). Ironically, the interplay between diversity and ability is rarely studied in radically different frameworks. Although diversity has been studied in prediction and search problems, the diversity-expertise tradeoff has not been studied systematically for small, deliberative groups facing binary classification problems. To fill this gap, I will introduce a new evidential sources framework and study whether, when, and (if so) why diversity trumps expertise in binary classification problems.
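As a back-of-the-envelope illustration of the tradeoff at stake (my own Condorcet-style toy, not the paper’s evidential-sources framework; the group size and reliabilities are assumptions), compare a group of “experts” who all read one highly reliable cue with a “diverse” group whose members read distinct, individually weaker but independent cues, deciding by majority vote:

    # Toy comparison of correlated experts vs. independent diverse members
    # on a binary classification decided by majority vote.
    import random

    def majority_correct(reliabilities, truth, rng):
        votes = [truth if rng.random() < r else (not truth) for r in reliabilities]
        return (sum(votes) > len(votes) / 2) == truth

    def accuracy(reliabilities, shared, trials=20000, seed=1):
        rng = random.Random(seed)
        correct = 0
        for _ in range(trials):
            truth = rng.random() < 0.5
            if shared:
                # everyone consults the same source: a single draw decides the group verdict
                reading = truth if rng.random() < reliabilities[0] else (not truth)
                correct += (reading == truth)
            else:
                correct += majority_correct(reliabilities, truth, rng)
        return correct / trials

    expert_group = [0.75] * 5     # five experts, all reading one 75%-reliable cue
    diverse_group = [0.65] * 5    # five members, each reading a distinct 65%-reliable cue
    print("expert (shared cue):  ", accuracy(expert_group, shared=True))
    print("diverse (independent):", accuracy(diverse_group, shared=False))

Under these illustrative numbers the independent group edges out the correlated experts (roughly 0.76 versus 0.75), the familiar reason to expect diversity to matter; the paper’s question is whether, when, and why such effects survive in small deliberative groups.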
-
We apply recent ideas about complexity and randomness to the philosophy of laws and chances. We develop two ways to use algorithmic randomness to characterize probabilistic laws of nature. The first, a generative chance law, employs a nonstandard notion of chance. The second, a probabilistic constraining law, imposes relative frequency and randomness constraints that every physically possible world must satisfy. The constraining notion removes a major obstacle to a unified governing account of non-Humean laws, on which laws govern by constraining physical possibilities; it also provides independently motivated solutions to familiar problems for the Humean best-system account (the Big Bad Bug and the zero-fit problem). On either approach, probabilistic laws are tied more tightly to corresponding sets of possible worlds: some histories permitted by traditional probabilistic laws are now ruled out as physically impossible. Consequently, the framework avoids one variety of empirical underdetermination while bringing to light others that are typically overlooked.
-
What is it for y to be objectively qualitatively overall at least as similar to x as z is? This paper defends a version of the following answer: it is for y to be at least as similar to x as z is in every qualitative respect. On the version defended in this paper, this analysis arguably entails that it is possible for some things to objectively qualitatively resemble each other more than they do other things. However, it also arguably entails that, given how the world contingently is, many things (if not all things) are incomparable in objective qualitative resemblance, where y and z are so incomparable to x iff: i) it is not the case that y is at least as objectively qualitatively similar to x as z is, and ii) it is not the case that z is at least as objectively qualitatively similar to x as y is.
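In symbols (my rendering; the relation \(\succeq_x\) is notation I am supplying, not the paper’s): the proposed analysis is

\[ y \succeq_x z \;\leftrightarrow\; \forall R\, \big( y \text{ is at least as similar to } x \text{ as } z \text{ is in qualitative respect } R \big), \]

and \(y\) and \(z\) are incomparable with respect to \(x\) just in case \( \neg (y \succeq_x z) \wedge \neg (z \succeq_x y) \).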
-
[An excerpt from Beyond Right and Wrong.] Some rights can be expected to promote overall well-being. Utilitarianism endorses these. Other rights lack this utilitarian property: they protect people against harmful interventions, but at greater cost to others who miss out on helpful interventions as a result. …
-
I specialize in trillion-dollar ideas: policy reforms which, if implemented, would generate trillions of dollars of net social benefits. Ideas like open borders, educational austerity, and by-right construction. …
-
Suppose there are two opaque boxes, A and B, of which I can choose one. A nearly perfect predictor of my actions put $100 in the box that they thought I would choose. Suppose I find myself with evidence that it’s 75% likely that I will choose box A (maybe in 75% of cases like this, people like me choose A). …
-
In 2015, Amy Finkelstein, Nathaniel Hendren, and Erzo Luttmer released an NBER working paper called “The Value of Medicaid: Interpreting Results from the Oregon Health Insurance Experiment.” The paper’s results were a slap in the face of Social Desirability Bias — and the authors boldly advertised them right in the abstract:
Our baseline estimates of Medicaid's welfare benefit to recipients per dollar of government spending range from about $0.2 to $0.4, depending on the framework, with at least two-fifths – and as much as four-fifths – of the value of Medicaid coming from a transfer component, as opposed to its ability to move resources across states of the world. …
-
We present a causal model for the EPR correlations. In this model, or better, framework for a model, causality is preserved by the direct propagation of causal influences between the wings of the experiment. We show that our model generates the same statistical results for EPR as orthodox quantum mechanics. We conclude that causality in quantum mechanics cannot be ruled out on the basis of the EPR-Bell-Aspect correlations alone.
-
When thinking about big social problems like climate change or factory farming, there are two especially common failure modes worth avoiding:
Neglecting small numbers that incrementally contribute to significant aggregate harms. …
-
It is a stark truth that the prison system in the United States is a moral catastrophe. Many of those who go to prison are routinely subject to battery, assault, and rape, or live in constant fear thereof. Incarcerated individuals are forced to align with gangs to protect themselves. They are treated by guards and other prison officials in deeply dehumanizing ways, subjected to psychological torture through solitary confinement and other measures, and sometimes inhabit literally unlivable conditions.
-
Cognitive scientists ascribe inferential processes to (neuro)cognitive systems to explain many of their capacities. Since these ascriptions have different connotations, philosophical accounts of inference could help clarify their assumptions and forestall potential confusion. However, many existing accounts define inference in ways that are out of touch with successful scientific practice – ways that overly intellectualise inference, construe inference in complete opposition to association, and imply that inferential processes prevent minds from being in contact with the outside world. In this chapter, we combine Siegel’s (2017) Response Hypothesis with insights from basal cognition and ecological rationality to sketch a philosophically viable, updated account of inference in (neuro)cognitive systems. According to this view, inference is a kind of rationally evaluable transition from some inputs or current representations to some conclusion or output representation. This notion of inference aligns with and can illuminate scientific practices in disparate fields, while eschewing a commitment to a consciously accessible language-like neural code or a formal system of mental logic, highlighting the continuity between inferential and associative processes, and allowing for a non-indirect mind-world relationship, where minds are genuinely open and responsive to their environment.
-
A general class of presupposition arguments holds that the background knowledge and theory required to design, develop, and interpret a machine learning (ML) system imply a strong upper limit to ML’s impact on science. I consider two proposals for how to assess the scientific impact of ML predictions, and I argue that while these accounts prioritize conceptual change, the presuppositions they take to be disqualifying for strong novelty are too restrictive. I characterize a general form of their arguments, which I call the Concept-free Design Argument: that strong novelty is curtailed by utilizing prior conceptualizations of target phenomena in model design.
-
Blame abounds in our everyday lives, perhaps no more so than on social media. With the rise of social networking platforms, we now have access to more information about others’ blameworthy behaviour and larger audiences to whom we can express our blame. But these audiences, while large, are not typically diverse. Just as we tend to gather and share information within online social networks made up of like-minded individuals, much of the moral criticism found on the internet is expressed within groups of agents with similar values and worldviews. Like these epistemic practices, the blaming practices found on social media have also received criticism. Many argue that the blame expressed on the internet is unfitting, excessive, and counterproductive. What accounts for the perniciousness of online blame? And what should be done to address it?
-
Suppose you are 40% confident that Candidate X will win in the upcoming election. Then you read a column projecting 80%. If you and the columnist are equally well informed and competent on this topic, how should you revise your opinion in light of theirs? Should you perhaps split the difference, arriving at 60%? Plenty has been written on this topic. Much less studied, however, is the question of what comes next. Once you’ve updated your opinion about Candidate X, how should your other opinions change to accommodate this new view? For example, how should you revise your expectations about other candidates running for other seats? Or your confidence that your preferred party will win a majority?
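One familiar baseline answer to the “what comes next” question (not necessarily the one this paper defends) is Jeffrey conditionalization: having shifted your credence in X, you redistribute credence in any other proposition Y in proportion to your old conditional credences,

\[ P_{\mathrm{new}}(Y) = P_{\mathrm{new}}(X)\, P_{\mathrm{old}}(Y \mid X) + P_{\mathrm{new}}(\neg X)\, P_{\mathrm{old}}(Y \mid \neg X). \]

For instance, with hypothetical conditional credences of 0.7 that your party wins a majority if Candidate X wins and 0.3 if X loses, moving from 40% to 60% confidence in X shifts your credence in a majority from 0.46 to 0.54.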
-
Traditional approaches to forming group credences operate on individuals’ credences themselves rather than on the evidential states giving rise to those credences. As a result, traditional approaches fail to capture the multitude of individual evidential states which can lead to the same group credences. This occurs when we fail to account for dependence among individuals and the resilience of their beliefs. Such omissions are not innocuous: they can underdetermine both the group belief and its updating strategy. We present an approach that allows one to focus instead on appropriately combining evidence, and in particular taking into account any overlaps in information. Once the evidence is properly captured, we will show, a full group distribution can be uniquely established on its basis. From this distribution, we can derive point estimates, intervals, and predictions. We call this the evidence-first method, in part to distinguish our approach from prevailing rules for combining beliefs, which may more accurately be described as credence-first.
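To see the overlap worry in a toy Bayesian case (my illustration, not the paper’s method): suppose two individuals share background evidence \(E_0\) and hold private evidence \(E_1\) and \(E_2\) respectively, with the pieces of evidence independent conditional on the hypothesis \(H\). Combining the evidence itself gives

\[ P(H \mid E_0, E_1, E_2) \propto P(H)\, P(E_0 \mid H)\, P(E_1 \mid H)\, P(E_2 \mid H), \]

with each piece counted once, whereas multiplying the two individual posteriors yields something proportional to \( P(H)^2\, P(E_0 \mid H)^2\, P(E_1 \mid H)\, P(E_2 \mid H) \), double-counting both the prior and the shared evidence.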
-
The evidentialist doctrine that rational belief must be proportioned to the evidence is often regarded as a cornerstone of science. I will argue that it is false. Rational belief need not be proportioned to the evidence. Nor, of course, does it succumb to prejudice and wishful thinking. The evidentialist doctrine is false because it clashes with compelling norms on the dynamics of rational belief. I’m going to illustrate this clash by looking at scenarios in which an agent’s evidence deteriorates over time, revealing less about the world or the agent’s location than their earlier evidence. According to the evidentialist doctrine, the agent’s beliefs should follow their deteriorating evidence: the agent should lose their confidence in propositions for which they used to have good evidence, without having received any contrary evidence. I will argue that the agent should instead follow a “conservative” policy and retain the earlier beliefs.