-
112145.28681
Discussions on the compositionality of inferential roles concentrate on extralogical vocabulary. However, there are nontrivial problems concerning the compositionality of sentences formed by the standard constants of propositional logic. For example, is the inferential role of A∧B uniquely determined by those of A and B? And how is it determined? This paper investigates such questions. We also show that these issues raise matters of more significance than may prima facie appear.
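To make the question concrete (a standard textbook illustration, not drawn from the paper): the inferential role of a conjunction is commonly identified with its introduction and elimination rules, which mention only A and B; the issue is whether rules of this kind fix the role of the compound uniquely, given the roles of its parts.

```latex
% Natural-deduction rules standardly taken to articulate the
% inferential role of A ∧ B in terms of A and B alone.
\[
\frac{A \qquad B}{A \wedge B}\ (\wedge\mathrm{I})
\qquad
\frac{A \wedge B}{A}\ (\wedge\mathrm{E}_1)
\qquad
\frac{A \wedge B}{B}\ (\wedge\mathrm{E}_2)
\]
```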
-
513526.286912
While the traditional conception of inductive logic is Carnapian, I develop a Peircean alternative and use it to unify formal learning theory, statistics, and a significant part of machine learning: supervised learning. Some crucial standards for evaluating non-deductive inferences have been assumed separately in those areas, but can actually be justified by a unifying principle.
-
567833.286926
We all perform experiments very often. When I hear a noise and deliberately turn my head, I perform an experiment to find out what I will see if I turn my head. If I ask a question not knowing what answer I will hear, I am engaging in (human!) …
-
571945.286936
There is no doubt that a theory that is unified has a certain appeal. Scientific practice in fundamental physics relies heavily on it. But is a unified theory more likely to be empirically adequate than a non-unified theory? Myrvold has pointed out that, on a Bayesian account, only a specific form of unification, which he calls mutual information unification, can have confirmatory value. In this paper, we argue that Myrvold’s analysis suffers from an overly narrow understanding of what counts as evidence. If one frames evidence in a way that includes observations beyond the theory’s intended domain, one finds a much richer and more interesting perspective on the connection between unification and theory confirmation. By adopting this strategy, we give a Bayesian account of unification that (i) goes beyond mutual information unification to include other cases of unification, and (ii) gives a crucial role to the element of surprise in the discovery of a unified theory. We illustrate the explanatory strength of this account with some cases from fundamental physics and other disciplines.
-
628801.286946
There has long been an impression that reliabilism implies externalism and that frequentist statistics, due to its reliabilist nature, is inherently externalist. I argue, however, that frequentist statistics can plausibly be understood as a form of internalist reliabilism—internalist in the conventional sense, yet reliabilist in certain unconventional and intriguing ways. Crucially, in developing the thesis that reliabilism does not imply externalism, my aim is not to stretch the meaning of ‘reliabilism’ merely to sever the implication. Instead, it is to gain a deeper understanding of frequentist statistics, which stands as one of the most sustained attempts by scientists to develop an epistemology for their own use.
-
628814.286954
I once received a simple test for whether I am a frequentist or Bayesian. A coin has just been tossed, but the outcome is hidden. What is the probability that it landed heads just now? According to the test, you are a Bayesian if your answer is ‘50%, because I am 50% sure that it landed heads, and equally sure that it didn’t.’ And you are a frequentist if your answer is ‘the probability is unknown but equals either 1 or 0, depending on whether the coin actually landed heads or tails, because probabilities are frequencies of events.’ Unfortunately, this test is too simplistic to reveal the complexity underlying the seemingly binary question: ‘To be a frequentist or Bayesian?’ There is actually a spectrum of potential answers, extending from radical frequentism to radical Bayesianism, with nuanced positions in between. Let me build up the spectrum one step at a time.
-
628958.286963
The debate between scientific realism and anti-realism remains at a stalemate, making reconciliation seem hopeless. Yet, important work remains: exploring a common ground, even if only to uncover deeper points of disagreement and, ideally, to benefit both sides of the debate. I propose such a common ground. Specifically, many anti-realists, such as instrumentalists, have yet to seriously engage with Sober’s call to justify their preferred version of Ockham’s razor through a positive account. Meanwhile, realists face a similar challenge: providing a non-circular explanation of how their version of Ockham’s razor connects to truth. The common ground I propose addresses these challenges for both sides; the key is to leverage the idea that everyone values some truths and to draw on insights from scientific fields that study scientific inference—namely, statistics and machine learning. This common ground also isolates a distinctively epistemic root of the irreconcilability in the realism debate. Keywords: Scientific Realism, Instrumentalism, Ockham’s Razor, Statistics, Machine Learning, Convergence to the Truth.
-
628972.286971
The epistemology of scientific inference has a rich history. According to the explanationist tradition, theory choice should be guided by a theory’s overall balance of explanatory virtues, such as simplicity, fit with data, and/or unification (Russell 1912). The instrumentalist tradition urges, instead, that scientific inference should be driven by the goal of obtaining useful models, rather than true theories or even approximately true ones (Duhem 1906). A third tradition is Bayesianism, which features a shift of focus from all-or-nothing beliefs to degrees of belief (Bayes 1763). It may be fair to say that these traditions are the big three in contemporary epistemology of scientific inference.
-
741126.286979
Our second stop in 2025 on the leisurely tour of SIST is Excursion 4 Tour II which you can read here. This criticism of statistical significance tests continues to be controversial, but it shouldn’t be. …
-
741127.287001
When you’re investigating reality as a scientist (and often as an ordinary person) you perform experiments. Epistemologists and philosophers of science have spent a lot of time thinking about how to evaluate what you should do with the results of the experiments—how they should affect your beliefs or credences—but relatively little on the important question of which experiments you should perform epistemologically speaking. …
-
745353.287011
Prior research has unveiled a pathologization effect where individuals perceived as having bad moral character are more likely to have their conditions labeled as diseases and are less often considered healthy compared to those viewed as having a good moral character. Moreover, these individuals are perceived as less unlucky in their affliction and more deserving of it. This study explores the broader impacts of moral character on such judgments, hypothesizing that these effects reach deeper and extend to both negative and positive moral evaluations. The pathologization effect also raises concerns about potential discrimination and the overmedicalization of normal health variations, so we also examine whether providing more detailed descriptions of conditions mitigates the influence of judgments of moral character. The methodology and broader implications of our findings are discussed, emphasizing the need for a deeper understanding of how moral judgments might influence patient care.
-
964049.287021
Report on the Conference on Probabilistic Reasoning in the Sciences, which took place at the Marche Polytechnic University in Ancona, Italy, 29-31 August 2024. Keywords: Probabilistic reasoning; Science; Methodology.
-
1035191.28703
We consider the following question: how close to the ancestral root of a phylogenetic tree is the most recent common ancestor of k species randomly sampled from the tips of the tree? For trees having shapes predicted by the Yule–Harding model, it is known that the most recent common ancestor is likely to be close to (or equal to) the root of the full tree, even as the number n of tips becomes large (for k fixed). However, this result does not extend to models of tree shape that more closely describe phylogenies encountered in evolutionary biology. We investigate the impact of tree shape (via the Aldous β-splitting model) on the number of edges that separate the most recent common ancestor of a random sample of k tip species from the root of the parent tree they are sampled from. Both exact and asymptotic results are presented. We also briefly consider a variation of the process in which a random number of tip species are sampled.
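For readers who want a feel for the quantity being studied, here is a minimal Monte Carlo sketch (ours, not the paper's method, and covering only the Yule–Harding shape model rather than the Aldous β-splitting family): it grows a random tree shape on n tips, samples k of them, and counts the edges between their most recent common ancestor and the root.

```python
import random

class Node:
    def __init__(self, parent=None):
        self.parent = parent

def yule_tree(n):
    """Grow a Yule-Harding tree shape with n tips by repeatedly
    picking a uniformly random tip and splitting it into two."""
    root = Node()
    tips = [root]
    while len(tips) < n:
        tip = tips.pop(random.randrange(len(tips)))
        tips += [Node(tip), Node(tip)]
    return tips

def ancestors(node):
    """Nodes from `node` up to (and including) the root."""
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    return chain

def mrca_root_distance(sample):
    """Number of edges separating the MRCA of `sample` from the root."""
    common = set(ancestors(sample[0]))
    for tip in sample[1:]:
        common &= set(ancestors(tip))
    # The MRCA is the common ancestor farthest from the root.
    return max(len(ancestors(node)) for node in common) - 1

# Monte Carlo estimate of the expected MRCA-to-root distance.
n, k, reps = 200, 3, 2000
mean = sum(mrca_root_distance(random.sample(yule_tree(n), k))
           for _ in range(reps)) / reps
print(f"estimated E[edges from MRCA of {k} tips to root] = {mean:.2f}")
```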
-
1149051.287039
We rigorously describe the relation in which a credence function should stand to a set of chance functions in order for these to be compatible in the way mandated by the Principal Principle. This resolves an apparent contradiction in the literature, by providing a formal way of combining credences with modest chance functions so that the latter indeed serve as guides for the former. Along the way we note some problematic consequences of taking admissibility to imply requirements involving probabilistic independence. We also argue, contra [12], that the Principal Principle does not imply the Principle of Indifference.
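For orientation, the Principal Principle in its familiar Lewisian form (quoted here only as background; the paper's formal rendering is more general) requires that an initial credence function C, conditional on the proposition that the chance of A equals x together with admissible evidence E, assign A credence x:

```latex
\[
C\bigl(A \mid \mathrm{Ch}(A) = x \wedge E\bigr) = x,
\qquad E \text{ admissible.}
\]
```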
-
1192247.287048
We study Doob’s Consistency Theorem and Freedman’s Inconsistency Theorem from the vantage point of computable probability and algorithmic randomness. We show that the Schnorr random elements of the parameter space are computably consistent, when there is a map from the sample space to the parameter space satisfying many of the same properties as limiting relative frequencies. We show that the generic inconsistency in Freedman’s Theorem is effectively generic, which implies the existence of computable parameters which are not computably consistent. Taken together, this work provides a computability-theoretic solution to Diaconis and Freedman’s problem of “know[ing] for which [parameters] θ the rule [Bayes’ rule] is consistent” ([DF86, 4]), and it strengthens recent similar results of Takahashi [Tak23] on Martin-Löf randomness in Cantor space.
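The classical phenomenon that the paper effectivizes can be illustrated with a small sketch (ours, not from the paper): for Bernoulli observations with a uniform prior, the posterior piles up around the true parameter as data accumulate, which is the prior-almost-sure consistency that Doob's theorem guarantees; the paper's question is for which individual parameters such convergence can be secured computably.

```python
import math
import random

def posterior_mass_near_truth(theta, n_obs, width=0.05, grid=2000):
    """Posterior probability (uniform prior, Bernoulli likelihood, evaluated
    on a grid) of the interval [theta - width, theta + width] after n_obs flips."""
    heads = sum(random.random() < theta for _ in range(n_obs))
    tails = n_obs - heads
    thetas = [(i + 0.5) / grid for i in range(grid)]
    logpost = [heads * math.log(t) + tails * math.log(1 - t) for t in thetas]
    top = max(logpost)
    weights = [math.exp(lp - top) for lp in logpost]   # unnormalized posterior
    near = sum(w for t, w in zip(thetas, weights) if abs(t - theta) <= width)
    return near / sum(weights)

theta_true = 0.3
for n in (10, 100, 1000, 10000):
    print(n, round(posterior_mass_near_truth(theta_true, n), 3))
# The mass near the true parameter tends to 1 as n grows, for almost every
# theta with respect to the prior, which is Doob's consistency theorem.
```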
-
1313936.287056
Many factors contribute to whether juries reach right verdicts. Here we focus on the role of diversity. Direct empirical studies of the effect of altering factors in jury deliberation are severely limited for conceptual, practical, and ethical reasons. Using an agent-based model to avoid these difficulties, we argue that diversity can play at least four importantly different roles in affecting jury verdicts. We show that where different subgroups have access to different information, equal representation can strengthen epistemic jury success, and if one subgroup has access to particularly strong evidence, epistemic success may demand participation by that group. Diversity can also reduce the redundancy of the information on which a jury focuses, which can have a positive impact. Finally, and most surprisingly, we show that limiting communication between diverse groups in juries can favor epistemic success as well.
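The abstract does not spell out the agent-based model, but the flavour of the first finding can be conveyed with a toy simulation of our own (not the authors' model): the truth generates several independent clues, each subgroup can read only half of them, and the jury follows the majority of all clues its members can read between them; a mixed jury, pooling non-redundant information, is right more often than a homogeneous one.

```python
import random

def verdict_accuracy(jury_groups, clue_reliability=0.7, n_clues=10, trials=20000):
    """Toy model: the truth generates n_clues independent clues, each pointing
    to the right verdict with probability clue_reliability. Group A jurors can
    read the first half of the clues, group B jurors the second half. The jury
    pools every clue at least one member can read and follows the majority
    (ties broken by a coin flip). Returns the fraction of correct verdicts."""
    half = n_clues // 2
    access = {"A": range(0, half), "B": range(half, n_clues)}
    correct = 0
    for _ in range(trials):
        clues = [random.random() < clue_reliability for _ in range(n_clues)]
        readable = set()
        for g in jury_groups:
            readable |= set(access[g])
        votes = [clues[i] for i in readable]
        margin = sum(votes) - len(votes) / 2
        if margin > 0 or (margin == 0 and random.random() < 0.5):
            correct += 1
    return correct / trials

print("homogeneous (12 A):", verdict_accuracy(["A"] * 12))
print("mixed (6 A, 6 B):  ", verdict_accuracy(["A"] * 6 + ["B"] * 6))
```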
-
1317296.287065
This week represents the convergence of so many plotlines that, if it were the season finale of some streaming show, I’d feel like the writers had too many balls in the air. For the benefit of the tiny part of the world that cares what I think, I offer the following comments. …
-
1429880.287073
“From the Archives” is a new blog series that will share some of my favorite posts, lightly revised and updated, from my 18 years of archives at philosophyetc.net. I’ll kick things off with my undergraduate honours thesis on “Modal Rationalism”, which I think remains a neat general introduction to some core issues in metaphysics, modal epistemology, and the philosophy of language. …
-
1437299.287082
Since the early debates on teleosemantics, there have been people objecting that teleosemantics cannot account for evolutionarily novel contents such as “democracy” (e.g., Peacocke 1992). Most recently, this objection was brought up by Garson (2019) and in a more moderate form by Garson & Papineau (2019). The underlying criticism is that the traditional selected effects theory of functions on which teleosemantics is built is unable to ascribe new functions to the products of ontogenetic processes and thus unable to ascribe functions to new traits that appear during the lifetime of an individual organism. I will argue that this underlying thought rests on rather common misunderstandings of Millikan’s theory of proper functions, especially her notions of relational, adapted, and derived proper functions (Millikan 1984: Ch. 2). These notions not only help us solve the problem of novel contents and ascribe functions to the products of ontogenetic selection mechanisms, but are also indispensable parts of every selected effects theory.
-
1665129.287094
If an agent can’t live up to the demands of ideal rationality, fallback norms come into play that take into account the agent’s limitations. A familiar human limitation is our tendency to lose information. How should we compensate for this tendency? The Sleeping Beauty problem allows us to isolate this question, without the confounding influence of other human limitations. If the coin lands tails, Beauty can’t preserve whatever information she has received on Monday: she is bound to violate the norms of ideal diachronic rationality. The considerations that support these norms, however, can still be used. I investigate how Beauty should update her beliefs so as to maximize the expected accuracy of her new beliefs. The investigation draws attention to important but neglected questions about the connection between rational belief and evidential support, about the status of ideal and non-ideal norms, about the dependence of epistemic norms on descriptive facts, and about the precise formulation of expected accuracy measures. It also sheds light on the puzzle of higher-order evidence.
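As a back-of-the-envelope illustration (not the paper's own analysis) of why "the precise formulation of expected accuracy measures" matters: with the Brier score, the credence in Heads that minimizes Beauty's expected inaccuracy is 1/3 if inaccuracy is totalled over awakenings, but 1/2 if it is averaged within each possible world.

```python
def expected_inaccuracy(p, per_awakening=True):
    """Expected Brier inaccuracy of credence p in Heads for Sleeping Beauty.
    Heads (prob 1/2): one awakening with inaccuracy (1 - p)^2.
    Tails (prob 1/2): two awakenings, each with inaccuracy p^2,
    either summed (per_awakening=True) or averaged within the world."""
    tails_penalty = 2 * p**2 if per_awakening else p**2
    return 0.5 * (1 - p)**2 + 0.5 * tails_penalty

grid = [i / 1000 for i in range(1001)]
best_total = min(grid, key=lambda p: expected_inaccuracy(p, per_awakening=True))
best_world = min(grid, key=lambda p: expected_inaccuracy(p, per_awakening=False))
print(best_total, best_world)   # roughly 1/3 and 1/2
```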
-
1668138.287103
In their recent defense of randomization, Martinez and Teira (2022) endorsed Worrall’s (2002; 2007) arguments that randomization does not ensure the balance of confounding factors, and they put forward two other epistemic virtues of random assignment (efficiency balance and Fisherian balance). Worrall’s criticism, claiming that randomization does not ensure Millean balance, has shaped the philosophical debates concerned with the role of randomization in causal inference and evidence hierarchies in medicine. We take issue with Worrall’s claim that randomization does not ensure the balance of confounders. First, we argue that randomization balances the influence of confounders on an outcome in the statistical sense. Second, we analyze the potential outcome approach to causal inference, show that the standard estimate of the average treatment effect (ATE) is an unbiased estimator of the average causal effect, and observe that actual causal inferences rely on randomization balancing the impact of confounders.
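A minimal potential-outcomes sketch (ours, not the authors') of balance "in the statistical sense": even with a strong confounder, the treated-minus-control difference in means from a completely randomized assignment is unbiased for the average treatment effect across repeated randomizations, although any single randomization may leave the groups imbalanced.

```python
import random

random.seed(0)

# Potential outcomes for 200 units with a binary confounder U.
# Y0 depends on U; the individual treatment effect is 2 for everyone,
# so the average treatment effect (ATE) is exactly 2.
N = 200
U = [random.random() < 0.5 for _ in range(N)]
Y0 = [3.0 * u + random.gauss(0, 1) for u in U]
Y1 = [y0 + 2.0 for y0 in Y0]

def difference_in_means():
    """One completely randomized experiment: treat a random half and return
    the treated-minus-control difference in observed means."""
    treated = set(random.sample(range(N), N // 2))
    t_mean = sum(Y1[i] for i in treated) / len(treated)
    c_mean = sum(Y0[i] for i in range(N) if i not in treated) / (N - len(treated))
    return t_mean - c_mean

estimates = [difference_in_means() for _ in range(5000)]
print("mean estimate over repeated randomizations:",
      round(sum(estimates) / len(estimates), 3), "(true ATE = 2)")
```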
-
1778717.287112
Nathan enjoys spending time on X. He finds discussions on this platform entertaining, though sometimes rude. He thinks that things have changed since Elon Musk took control of the social network to turn it into a political and ideological weapon. …
-
1874272.28712
Wittgenstein distinguished between two uses of ‘I’, one “as object” and the other “as subject”, a distinction that Shoemaker elucidated in terms of a notion of immunity to error through misidentification (‘IEM’); in their use “as subject”, first-personal claims are IEM, but not in their use “as object”. Shoemaker argued that memory judgments based on “personal”, episodic memory are only de facto IEM, not strictly speaking IEM, while Gareth Evans disputed it. In the past two decades research on memory has produced very significant results, which have changed the philosophical landscape. As part of it, several new arguments have been made for and against the IEM of personal memories. The paper aims to defend the Shoemaker line by critically engaging with some compelling recent contributions.
-
1874292.28713
Wittgenstein distinguished between two uses of “I”, one “as object” and the other “as subject”, a distinction that Shoemaker elucidated in terms of a notion of immunity to error through misidentification (“IEM”); first-personal claims are IEM in the use “as subject”, but not in the other use. Shoemaker argued that memory judgments based on “personal”, episodic memory are not strictly speaking IEM; Gareth Evans disputed this. Similar issues have been debated regarding self-ascriptions of conscious thoughts based on first-personal awareness, in the light of claims of “thought insertion” in schizophrenic patients. The paper aims to defend a Shoemaker-like line by critically engaging with some compelling recent contributions. Methodologically, the paper argues that to properly address these issues the all-inclusive term “thought” should be avoided, and specific types of thoughts countenanced.
-
2051901.28714
This paper presents a novel approach: using a digital calculation method for propositional logical reasoning. The paper demonstrates how to discover the primitive numbers and the digital calculation formulas by analyzing the truth tables. Then it illustrates how to calculate and compare the truth values of various expressions by using the digital calculation method. As an enhanced alternative to existing approaches, the proposed method transforms statement-based or table-based reasoning into number-based reasoning. Thereby, it eliminates the need for truth tables and obviates the need to apply theorems, rewrite statements, or change symbols. It provides a more streamlined solution for a single inference and proves more efficient for repeated inferences in long-term use. It is suitable for manual calculation, large-scale computation, AI, and automated reasoning.
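The paper's own primitive numbers and calculation formulas are not reproduced here, but the general idea of trading truth tables for arithmetic on numbers can be illustrated by a standard bitmask encoding (our sketch, not the proposed method): a formula over two variables is stored as a 4-bit integer listing its truth-table column, connectives become bitwise operations, and tautology-checking becomes an integer comparison.

```python
# Each formula over variables p, q is encoded as a 4-bit integer whose
# bits are the rows of its truth table, so connectives reduce to bitwise
# arithmetic and tautology-checking to an integer comparison.
ROWS = 4                      # 2 variables -> 4 truth-table rows
MASK = (1 << ROWS) - 1        # 0b1111, the constant "true"

p = 0b0101                    # rows where p is true
q = 0b0011                    # rows where q is true

def NOT(a):     return ~a & MASK
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def IMPL(a, b): return NOT(a) | b

# p -> (q -> p) has truth table 0b1111, i.e. it is a tautology.
print(bin(IMPL(p, IMPL(q, p))), IMPL(p, IMPL(q, p)) == MASK)
```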
-
2167259.287151
The so-called Geometric Trinity of Gravity includes General Relativity (GR), based on spacetime curvature; the Teleparallel Equivalent of GR (TEGR), which relies on spacetime torsion; and the Symmetric Teleparallel Equivalent of GR (STEGR), grounded in nonmetricity. Recent studies demonstrate that GR, TEGR, and STEGR are dynamically equivalent, raising questions about the fundamental structure of spacetime, the under-determination of these theories, and whether empirical distinctions among them are possible. The aim of this work is to show that, although these theories are equivalent in many respects, they are not equivalent in every respect. In particular, their relationships to the Equivalence Principle (EP) differ.
-
2282668.287161
Pursuing a scientific idea is often justified by the promise associated with it. Philosophers of science have proposed a variety of approaches to such promise, including more specific indicators. Economic models in particular emphasise the trade-off between an idea’s benefits and its costs. Taking up this Peirce-inspired idea, we spell out the metaphor of such a cost-benefit analysis of scientific ideas. We show that it fruitfully urges a set of salient meta-methodological questions that accounts of scientific pursuit-worthiness ought to address. In line with such a meta-methodological framework, we articulate and explore an appealing and auspicious concretisation—what we shall dub “the virtue-economic account of pursuit-worthiness”: cognitive benefits and costs of an idea, we suggest, should be characterised in terms of an idea’s theoretical virtues, such as empirical adequacy, explanatory power, or coherence. Assessments of pursuit-worthiness are deliberative judgements in which scientifically competent evaluators weigh and compare the prospects of such virtues, subject to certain rationality constraints that ensure historical and contemporary scientific circumspection, coherence and systematicity. The virtue-economic account, we show, sheds new light on the normativity of scientific pursuit, methodological pluralism in science, and the rationality of historical science.
-
2361203.287172
Our first stop in 2025 on the leisurely tour of SIST is Excursion 4 Tour I which you can read here. I hope that this will give you the chutzpah to push back in 2025, if you hear that objectivity in science is just a myth. …
-
2455761.287183
The paper analyzes the notion of exploration that can be found in the distinction between exploratory and confirmatory research, which is sometimes appealed to in the metascience literature. We argue that this notion (a) differs in important respects from previous works in exploratory data analysis and (b) contains some counterintuitive assumptions about the nature of exploration. Engaging with works in the history and philosophy of experimentation and modeling, we develop and defend a more comprehensive and accurate notion of exploration and argue that it is better suited for a normative analysis of exploratory research.
-
2467399.287194
Miles Tucker’s (2022) ‘Consequentialism and Our Best Selves’ defends a “maximizing theory of moral motivation”, on which we should have just those motives (among those “available” to us) that would make things go best. …