-
1403614.318375
Moral arguments against the consumption of animal products from factory farms are traditionally categorical. The conclusions require people to eliminate from their diets all animal products (veganism), all animal flesh (vegetarianism), all animal flesh except seafood (pescetarianism), etc. An alternative “reducetarian” approach prescribes progressive reduction in one's consumption of animal products, not categorical abstention. We articulate a much-needed moral defense of this more ecumenical approach. We start with a presumptive case in favor of reducetarianism before moving on to address three objections—that it falls short of our obligations to address such an egregious practice, is a rationalization of the status quo, and cannot fix systemic injustices in animal agriculture. We conclude that reducetarianism is a defensible approach for many people and is a promising route to moral progress on factory farming.
-
1411057.318491
Mechanistic theories of explanation are widely held in the philosophy of science, especially in philosophy of biology, neuroscience and cognitive science. While such theories remain dominant in the field, an increasing number of challenges have been raised against them over the past decade. These challenges claim that mechanistic explanations can lead to incoherence or triviality, or can deviate too far from how scientists in the life sciences genuinely employ the term “mechanism”. In this paper, I argue that these disputes are fueled, in part, by the running together of distinct questions and concerns regarding mechanisms, representations of mechanisms, and mechanistic explanation. More care and attention to how these are distinct from one another, but also the various ways they might relate, can help to push these disputes in more positive directions.
-
1411110.318504
This study examines how large language models (LLMs) transform knowledge and literature from a technocentric perspective. While LLMs centralize human knowledge and reconstruct it in a relational memory framework, research indicates that when trained on their own data, they experience “model collapse.” Experiments reveal that as generations progress, language deteriorates, variance decreases, and confusion increases. While humans refine their language through reading, machines encounter epistemological ruptures due to statistical errors. Artificial literature diverges from human literature; machine-generated texts are a literary illusion. LLMs can be regarded as a technological phenomenon that instrumentalizes human knowledge, tilting the subject-object balance in favor of the machine and creating its own “culture.” They signal a shift from a human-centered paradigm to a knowledge-centered approach. This study questions the boundaries of artificial literature and whether machine language can be considered “knowledge,” while exploring the transformations in the human-machine relationship.
-
1428399.318521
This post is free to read, so please share it widely. And, as always, please ‘like’ it via the heart below and restack it on notes if you get something out of it. It’s the best way to help others find my work. …
-
1440381.318531
Maria Montessori (1870–1952) was one of the most influential
pedagogues of the late nineteenth and early twentieth centuries,
developing an educational method that currently guides over 15,000
schools in dozens of countries. Montessori was never merely a teacher,
however. She was a psychologist, anthropologist, doctor, cultural
critic, and philosopher. Her writings span a wide range of
philosophical issues, from metaphysics to political philosophy, but
she always discusses philosophical issues in ways that make use of
insights—what she calls revelations—gleaned from her work
with children. In recent years, philosophers have begun to attend to
her work.
-
1452971.318548
In this short note, which is the final chapter of the volume 60 Years of Connexive Logic, we list ten open problems. Some of these problems are technical and precisely stated, while others are less technical and even speculative. We hope that the list inspires some readers to contribute to the field by tackling one or many of the problems.
-
1452993.318558
The present article aims at generalizing the approach to connexive logic that was initiated in [27], by following the work by Paul Egré and Guy Politzer. To this end, a variant of the connexive modal logic CK is introduced and some basic results including soundness and completeness results are established. A tableau calculus is also presented in an appendix.
-
1460248.318567
Time-travel fiction commonly depicts time travelers who encounter their past selves or, in the grandfather paradox, their ancestors. In traditional fictional representations of time travel, such as in H. G. Wells’s The Time Machine, travelers age in the same time sense as those visited in the past and future. Elsewhere, fantasy fiction supplies another possibility: the wizard Merlyn in T. H. White’s 1938 fantasy novel, The Sword in the Stone, meets a young Arthur. Merlyn ages in the opposite time sense to Arthur. Arthur’s first meeting with Merlyn is Merlyn’s last meeting with Arthur; and Arthur’s last meeting with him is Merlyn’s first. We can imagine time travelers who arrive in the past to meet their former selves, but now age in the opposite time sense. They are still time travelers since they are meeting their past selves. However, we have now added a twist from another part of the fantasy literature.
-
1468351.318577
In Family Values, Harry Brighouse and Adam Swift ask whether children need parents. That inquiry seems a wild project, but then philosophers are supposed to question everything and follow the argument where it leads. …
-
1512845.318586
This paper investigates two forms of the Routley star operation, one in Routley & Routley 1972 and the other in Leitgeb 2019. We use object theory (OT) to define both forms and show that in OT’s hyperintensional logic, (a) the two forms aren’t equivalent, but (b) they become equivalent under certain conditions. We verify our definitions by showing that the principles governing both forms become derivable and need not be stipulated. Since no mathematics is assumed in OT, the existence of the Routley star image s* of a situation s is guaranteed not by set theory but by a theory of abstract objects. The work in the paper integrates the Routley star into a more general theory of (partial) situations that has previously been used to develop the theory of possible worlds and impossible worlds.
-
1526392.318595
In this paper, we provide an axiom system for the relevant logic of equivalence relation frames and prove completeness for it. This provides a partial answer to the longstanding open problem of axiomatizing frames for relevant modal logics where the modal accessibility relation is symmetric. Following this, we show that the logic enjoys Halldén completeness and that a related logic enjoys the disjunction property.
-
1541391.318604
James Gleick’s NYRB essay on the history of futurology begins on a mordant note:
Invited to compose a message for posterity to be buried in a time capsule at the 1939 New York World’s Fair and opened five thousand years later, Albert Einstein sounded a dour tone: “Anyone who thinks about the future must live in fear and terror.”
But the highlights of Gleick’s review—drawn from Glenn Adamson’s book, A Century of Tomorrows—are ostensibly utopian. …
-
1561817.318613
Through a series of empirical studies involving native speakers of English, German, and Chinese, this paper reveals that the predicate “true” is inherently ambiguous in the empirical domain. Truth statements such as “It is true that Tom is at the party” seem to be ambiguous between two readings. On the first reading, the statement means “Reality is such that Tom is at the party.” On the second reading, the statement means “According to what X believes, Tom is at the party.” While there appear to exist some cross-cultural differences in the interpretation of the statements, the overall findings robustly indicate that “true” has multiple meanings in the realm of empirical matters.
-
1561837.318623
Semantic features are components of concepts. In philosophy, there is a predominant focus on those features that are necessary (and jointly sufficient) for the application of a concept. Consequently, the method of cases has been the paradigm tool among philosophers, including experimental philosophers. However, whether a feature is salient is often far more important for cognitive processes like memory, categorization, recognition and even decision-making than whether it is necessary. The primary objective of this paper is to emphasize the significance of researching salient features of concepts. I thereby advocate the use of semantic feature production tasks, which not only enable researchers to determine whether a feature is salient, but also provide a complementary method for studying ordinary language use. I will discuss empirical data on three concepts, conspiracy theory, female/male professor, and life, to illustrate that semantic feature production tasks can help philosophers (a) identify those salient features that play a central role in our reasoning about and with concepts, (b) examine socially relevant stereotypes, and (c) investigate the structure of concepts.
-
1581510.318632
Hume [Hume 1739: bk.I pt.III sec.XI] held, incredibly, that objective chance is a projection of our beliefs. Bruno de Finetti [1970] gave mathematical substance to this idea. Scientific reasoning about chance, he argued, should be understood as arising from symmetries in degrees of belief. De Finetti’s gambit is popular in some quarters of statistics and philosophy – see, for example, [Bernardo and Smith 2009], [Spiegelhalter 2024], [Skyrms 1984: ch.3], [Diaconis and Skyrms 2017: ch.7], [Jeffrey 2004]. It is safe to say, however, that it has not been widely accepted. Science textbooks generally ignore it. So does the excellent Stanford Encyclopedia entry on “Interpretations of Probability” [Hájek 2023].
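De Finetti’s appeal to symmetries in degrees of belief can be made concrete with exchangeability: under any prior mixture of i.i.d. Bernoulli chance models, the probability of an outcome sequence depends only on how many successes it contains, not on their order. The following is a minimal sketch, not drawn from the works cited; the two-point prior and the function name are purely illustrative:

```python
from math import prod

def seq_prob(seq, prior):
    """Marginal probability of a 0/1 outcome sequence under a mixture of
    i.i.d. Bernoulli(p) chance models, weighted by a prior over p."""
    return sum(
        weight * prod(p if x == 1 else 1.0 - p for x in seq)
        for p, weight in prior.items()
    )

# Illustrative two-point prior over the chance p of success.
prior = {0.2: 0.5, 0.8: 0.5}

# Order is irrelevant: (1, 0, 0) and (0, 0, 1) get the same probability.
p_a = seq_prob((1, 0, 0), prior)
p_b = seq_prob((0, 0, 1), prior)
```

Because the mixture weights attach to the chance parameter rather than to positions in the sequence, any reordering of a sequence receives the same probability, which is the symmetry de Finetti exploits.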
-
1584125.318641
Traditional arguments against or in favor of continuity rely upon the presupposition that scientific theories can serve as markers of descriptive truth. I argue that such a notion is misguided if we are concerned with the question of how our scientific schemes ought to develop. Instead, a reconstruction of the term involves identifying those concepts which guide the development from one successive scheme to the next and labelling those concepts as continuous. I explicitly construct an example of this kind of continuity utilizing two formulations of Quantum Field Theory (QFT), and identify what persists from the standard formulation, beginning with an action, to the successive one, which makes use of spinor helicity variables. Three concepts persist which supply explicit constraints on our expressions and serve to match them onto empirical predictions: Lorentz invariance, locality and unitarity. Further extensions of this kind of analysis to models beyond the physical sciences are proposed.
-
1584142.31865
The extravagances of quantum mechanics (QM) never fail to enrich the daily debate around natural philosophy. Entanglement, non-locality, collapse, many worlds, many minds, and subjectivism have challenged generations of thinkers. This paper's approach can perhaps be placed in the stream of quantum logic, in which the “strangeness” of quantum mechanics is “measured” through the violation of Bell’s inequalities; from there, it attempts an interpretative path that preserves realism yet ends up overturning it, restating the fundamental mechanisms of QM as a logical necessity for a strong realism.
-
1584159.31866
Quantum mechanics is a theory that is as effective as it is counterintuitive. While quantum practices operate impeccably, they compel us to embrace enigmatic phenomena like the collapse of the state vector and non-locality, thereby pushing us towards untenable “hypotheses non fingo” stances. However, a century after its inception, we are presented with a promising interpretive key, intimated by Wheeler as early as 1974 [ ]. The interpretative paradoxes of this theory might be resolved if we discern the relationship between logical undecidability and quantum undecidability. It will be demonstrated how both are intricately linked to an observer/observed relational issue, and how, following this path, the idiosyncratic behaviours of quantum physics can be reconciled with the normative.
-
1584203.318669
In epidemiology, an effect of a dichotomous exposure on a dichotomous outcome is a comparison of risks between the exposed and the unexposed. Causally interpreted, this comparison is assumed to equal a comparison in counterfactual risks if, hypothetically, both exposure states were to occur at once for each subject (Hernán and Robins, 2020). These comparisons are summarized by effect measures like risk difference or risk ratio. Risk difference describes the additive influence of an exposure on an outcome, and is often called an absolute effect measure. Trials occasionally report the inverse of a risk difference, which can also be classified as an absolute measure, as inverting it again returns the risk difference. Measures like risk ratio, which describe a multiplier of risk, are called relative, or ratio measures.
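As a minimal numeric sketch of the two measure families described above (the function names and the risk values are illustrative, not from the cited source):

```python
def risk_difference(risk_exposed, risk_unexposed):
    """Absolute (additive) effect measure: difference in risks."""
    return risk_exposed - risk_unexposed

def risk_ratio(risk_exposed, risk_unexposed):
    """Relative (ratio) effect measure: multiplier of risk under exposure."""
    return risk_exposed / risk_unexposed

# Illustrative risks: 30% among the exposed, 10% among the unexposed.
rd = risk_difference(0.30, 0.10)  # additive influence of exposure
rr = risk_ratio(0.30, 0.10)      # multiplier of risk
```

The inverse of a risk difference mentioned in the text is simply `1 / rd`; inverting that quantity again recovers `rd`, which is why it can still be classified as an absolute measure.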
-
1584221.318677
I argue that forays into history of science in Kuhn’s The Structure of Scientific Revolutions (1962/1996) are by and large instances of “Great Man” history of science. “Great Man” history is the idea that history is the biography of great men. The “Great Man” of science model not only excludes women and people of color from science but also suggests that only special, exceptional people can succeed in science. If this is correct, then Kuhn (1962/1996) fails to usher in a “historiographic revolution in the study of science” or a “new historiography” (Kuhn 1962/1996, 3), as the book purports to do. Instead, it merely perpetuates the defunct historiography of the “Great Man” of science.
-
1588515.318687
Suppose for simplicity that everyone is a good Bayesian and has the same priors for a hypothesis H, and also the same epistemic interests with respect to H. I now observe some evidence E relevant to H. My credence now diverges from everyone else’s, because I have new evidence. …
-
1588516.318697
Suppose that your priors for some hypothesis H are 3/4 while my priors for it are 1/2. I now find some piece of evidence E for H which raises my credence in H to 3/4 and would raise yours above 3/4. If my concern is for your epistemic good, should I reveal this evidence E? …
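The scenario can be checked with odds-form Bayesian updating: posterior odds equal prior odds times the likelihood ratio of the evidence. A likelihood ratio of 3 is an assumed value here (any ratio that raises a 1/2 prior to 3/4 will do); with it, a 3/4 prior is pushed to 0.9, above 3/4 as the excerpt describes. A sketch:

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior credence via odds-form Bayes: post_odds = prior_odds * LR."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Evidence E with likelihood ratio 3 raises my 1/2 prior to 3/4 ...
mine = bayes_update(0.5, 3.0)
# ... and would raise your 3/4 prior above 3/4 (to 0.9).
yours = bayes_update(0.75, 3.0)
```

Working in odds makes the update a single multiplication, which is why the same piece of evidence moves different priors by different amounts on the probability scale.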
-
1600010.318707
Next month, I’m speaking at Natal-Con in Austin. The line-up is a who’s who of thinkers advocating more births: the Collinses, Lyman Stone, Catherine Pakaluk, Jonathan Anomaly, Razib Khan, Crémieux, Robin Hanson, and many more. …
-
1631477.318716
One of the solutions of the measurement problem is given by spontaneous localization theories, in which a non-linear and stochastic dynamics makes superpositions spontaneously and randomly ‘decay’ into well localized states. In this paper I discuss the original spontaneous localization theory as well as its subsequent refinements. Also, I present their possible ontologies and their relativistic extension, analyzing whether it is the case that spontaneous localization theories are more compatible with relativity than their alternatives. A notable feature of these approaches is that they make predictions which differ from the ones of axiomatic quantum theory, and thus they can be empirically tested. I conclude the paper by considering the problem of the tails and the question of whether GRW can provide a better ground for a statistical mechanical explanation of the phenomena.
-
1685559.318725
Jeremy Kuhn, Carlo Geraci, Philippe Schlenker, Brent Strickland. Boundaries in space and time: Iconic biases across modalities. Cognition, 2021, 210, pp.104596. doi:10.1016/j.cognition.2021.104596.
-
1696233.318734
Bet On It reader Dan Barrett wrote these notes for his Book Nook book club on my Selfish Reasons to Have More Kids: Why Being a Great Parent Is Less Work and More Fun Than You Think. Dan’s idea:
I’m organizing reading groups packaged as the Book Nook to help colleagues (1) guide their own learning journeys, (2) connect with people they’d otherwise not meet, & (3) deepen their understanding of the Principles of Human Progress. …
-
1699549.318743
The inductive risk argument challenges the value-free ideal of science by asserting that scientists should manage the inductive risks involved in scientific inference through social values, that is, by weighing the social implications of errors when setting evidential thresholds. Most of the previous analyses of the argument fall short of engaging directly with its core assumptions, and thereby offer limited criticisms. This paper critically examines the two key premises of the inductive risk argument: the thesis of epistemic insufficiency, which asserts that the internal standards of science do not suffice to determine evidential thresholds in a non-arbitrary fashion, and the thesis of legitimate value-encroachment, which asserts that non-scientific value judgments can justifiably influence these thresholds. A critical examination of the first premise shows that the inductive risk argument does not pose a unique epistemic challenge beyond what is already implied by fallibilism about scientific knowledge, and fails because the mere assumption of fallibilism does not imply the untenability of value-freedom. This is demonstrated by showing that the way in which evidential thresholds are set in science is not arbitrary in any sense that would lend support to the inductive risk argument. A critical examination of the second premise shows that incorporating social values into scientific inference as an inductive risk-management strategy faces a meta-criterion problem, and consequently leads to several serious issues such as wishful thinking, category mistakes in decision making, or Mannheim-style paradoxes of justification. Consequently, value-laden strategies for inductive risk management in scientific inference would likely weaken the justification of scientific conclusions in most cases.
-
1699569.318755
Scientific principles can undergo various developments. While philosophers of science have acknowledged that such changes occur, there is no systematic account of the development of scientific principles. Here we propose a template for analyzing the development of scientific principles called the ‘life cycle’ of principles. It includes a series of processes that principles can go through: prehistory, elevation, formalization, generalization, and challenge. The life cycle, we argue, is a useful heuristic for the analysis of the development of scientific principles. We illustrate this by discussing examples from foundational physics including Lorentz invariance, Mach’s principle, the naturalness principle, and the perfect cosmological principle. We also explore two applications of the template. First, we propose that the template can be employed to diagnose the quality of scientific principles. Second, we discuss the ramifications of the life cycle’s processes for the empirical testability of principles.
-
1730339.318766
FAQ on Microsoft’s topological qubit thing
Q1. Did you see Microsoft’s announcement? A. Yes, thanks, you can stop emailing to ask! Microsoft’s Chetan Nayak was even kind enough to give me a personal briefing a few weeks ago. …
-
1757216.318774
In this paper, we provide a critical overview of Feyerabend’s unpublished manuscript “On the Responsibility of Scientists.” Specifically, we locate the paper within Feyerabend’s corpus and show how it relates to his published remarks on topics such as expertise, democracy and science, opportunism, science funding, and the value of scientific knowledge. We also show how Feyerabend’s views anticipate contemporary philosophical literature on values in science and point it in novel directions.