-
Homology is a fundamental but controversial concept in biology, referring to the sameness of biological characters across organisms. Despite its crucial role, its ontological nature has been a subject of intense debate, with a dichotomy between individualist and natural kind views. This study proposes a category-theoretic framework to reconcile these views by emphasizing the processual nature of homology. We first review major philosophical views of homology with their respective advantages and disadvantages. Next, we highlight the dynamic and evolving nature of homologs through two thought experiments. Through mathematical formulation, we then show that the individualist and natural kind views represent ordered set- and groupoid-like aspects, derived from a primary category-theoretical model based on a process-first dynamic view of homology. Our model covers a wide range of phenomena linked with homology, such as atavism, deep homology, and developmental system drift (DSD). Furthermore, it provides a unified perspective on the ontological nature of homology, overcoming the longstanding dichotomy between individuals and kinds in Western philosophy.
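A note on the category-theoretic vocabulary (my gloss, not the paper's own formulation): the contrast between the two aspects can be read off two standard special cases of a category, one preorder-like, with at most one morphism between any two objects, and one groupoid-like, with every morphism invertible:

\[
\text{preorder:}\quad |\mathrm{Hom}(X,Y)| \le 1 \ \text{for all } X, Y,
\qquad
\text{groupoid:}\quad \forall f \colon X \to Y,\ \exists f^{-1} \colon Y \to X \ \text{with } f^{-1} \circ f = \mathrm{id}_X,\ f \circ f^{-1} = \mathrm{id}_Y.
\]

On this reading, the individualist view emphasizes the directed, order-like structure of descent, while the natural kind view emphasizes the invertible, equivalence-like structure of correspondence between homologous characters.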
-
Pseudo-consciousness bridges the gap between rigid, task-driven AI and the elusive dream of true artificial general intelligence (AGI). While modern AI excels in pattern recognition, strategic reasoning, and multimodal integration, it remains fundamentally devoid of subjective experience. Yet, emerging architectures are displaying behaviors that look intentional—adapting, self-monitoring, and making complex decisions in ways that mimic conscious cognition. If these systems can integrate information globally, reflect on their own processes, and operate with apparent goal-directed behavior, do they qualify as functionally conscious? This paper introduces pseudo-consciousness as a new conceptual category, distinct from both narrow AI and AGI. It presents a five-condition framework that defines AI capable of consciousness-like functionality without true sentience. By drawing on insights from computational theory of mind, functionalism, and neuroscientific models—such as Global Workspace Theory and Recurrent Processing Theory—we argue that intelligence and experience can be decoupled. The implications are profound. As AI systems become more autonomous and embedded in critical domains like healthcare, governance, and warfare, their ability to simulate awareness raises urgent ethical and regulatory concerns. Could a pseudo-conscious AI be trusted? Would it manipulate human perception? How do we prevent society from anthropomorphizing machines that only imitate cognition? By redefining the boundaries of intelligence and agency, this study lays the foundation for evaluating, designing, and governing AI that seems aware—without ever truly being so.
-
In the philosophical debate about scientific progress, several authors appeal to a distinction between what constitutes scientific progress and what promotes it (e.g., Bird, 2008; Rowbottom, 2008; Dellsén, 2016). However, the extant literature is almost completely silent on what exactly it is for scientific progress to be promoted. Here I provide a precise account of progress promotion on which it consists, roughly, in increasing expected progress. This account may be combined with any of the major theories of what constitutes scientific progress, such as the truthlikeness, problem-solving, epistemic, and noetic accounts. However, I will also suggest that once we have this account of progress promotion up and running, some accounts of what constitutes progress become harder to motivate by the sorts of considerations often adduced in their favor, while others turn out to be easier to defend against common objections.
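To fix ideas, the rough statement admits a simple formal gloss (my formalization; the paper's official account may differ in detail): letting \(P\) be a measure of the progress eventually realized, an action \(A\) promotes progress just in case it raises expected progress relative to some baseline, e.g.

\[
\mathrm{Promotes}(A) \;\iff\; \mathbb{E}[P \mid A] > \mathbb{E}[P \mid \neg A].
\]

On such a gloss, promoting progress is a matter of probabilistic difference-making, whereas constituting progress is a matter of what is actually achieved.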
-
Despite widespread scientific agreement that human biological diversity is real, the question of whether “race” corresponds to a natural kind remains deeply contested. While some philosophers and scientists continue to explore ways of biologically grounding racial categories, this paper argues that the project of racial naturalism—whether in its essentialist or reformulated variants—remains conceptually, empirically, and metaphysically untenable. Yet this is not a rejection of the reality of race. Rather, I contend that race is a real and powerful social construct, historically forged and materially entrenched, but not a natural kind in the biological or taxonomic sense.
-
This paper argues that lockdown was racist. The terms are broad, but the task of definition is not random, and in §2 we motivate certain definitions as appropriate. In brief: “lockdown” refers to regulatory responses to the Covid-19 (C-19) pandemic involving significant restrictions on leaving the home and on activities outside the home, historically situated in the pandemic and widely known as “lockdowns”; and “racist” indicates what we call negligent racism, a type of racism which we define. Negligent racism does not require intent, but beyond this constraint, we do not endorse any definition of racism in general. With definitions in hand, in §3 we argue that lockdown was harmful in Africa, causing great human suffering that was not offset by benefits and amounted to net harm far greater than in the circumstances in which most White people live. Since 1.4 …
-
This paper argues against the view, proposed in Langland-Hassan (2020), that attitudinal imaginings are reducible to basic folk-psychological attitudes such as judgments, beliefs, desires, decisions, or combinations thereof. The proposed reduction fails because attitudinal imaginings, though similar to basic attitudes in certain respects, function differently than basic attitudes. I demonstrate this by exploring two types of cases: spontaneous imaginings, and imaginings that arise in response to fiction, showing that in these cases, imaginings cannot be identified with basic attitudes. I conclude that imagining is a distinct attitude: it enables us to freely conjure up scenarios without being bound by the restrictions that govern basic folk-psychological attitudes.
-
According to the desire-satisfaction theory of welfare, something is good for me to the extent that I desire it. This theory faces the “scope problem”: many of the things I desire, intuitively, lie beyond the scope of my welfare. Here, I argue that a simple solution to this problem is available. First, I suggest that it is a general feature of desires that they can differ not only in their objects but also in their “targets,” or for the sake of whom one has the desire. For example, I can desire that my child win an award either for their sake or for my own sake. Second, I show that we can use this idea to solve the scope problem by holding that something is good for me to the extent that I desire it for my own sake. Despite first appearances, this solution is not ad hoc, incomplete, or circular.
-
Agents are said to be “clueless” if they are unable to predict some ethically important consequences of their actions. Some philosophers have argued that such “cluelessness” is widespread and creates problems for certain approaches to ethics. According to Hilary Greaves, a particularly problematic type of cluelessness, namely, “complex” cluelessness, affects attempts to do good as effectively as possible, as suggested by proponents of “Effective Altruism,” because we are typically clueless about the long-term consequences of such interventions. As a reaction, she suggests focusing on interventions that are long-term oriented from the start. This paper argues for three claims: first, that David Lewis’ distinction between sensitive and insensitive causation can help us better understand the differences between genuinely “complex” and more harmless “simple” cluelessness; second, that Greaves’ worry about complex cluelessness can be mitigated for attempts to do near-term good; and, third, that Greaves’ recommendation to focus on long-term-oriented interventions in response to complex cluelessness is not promising as a strategy specifically for avoiding complex cluelessness. There are systematic reasons why the actual effects of serious attempts to beneficially shape the long-term future are inherently difficult to predict and why, hence, such attempts are prone to backfiring.
-
In recent work, Adlam (2022b), Chen and Goldstein (2022), and Meacham (2023) have defended accounts of laws that take laws to be primitive global constraints. A major advantage of these accounts is that they’re able to accommodate the many different kinds of laws that appear in physical theories. In this paper I’ll present these three accounts, highlight their distinguishing features, and note some key differences that might lead one to favor one of these accounts over the others. I’ll conclude by briefly discussing a version of a “constraint” account that I think is especially attractive.
-
I argue that there are Kantian grounds to endorse a Universal Basic Income (UBI) and that Kant’s practical philosophy can contribute to current debates about the ethics of UBI. I will make two points that mutually support each other. Firstly, there is a pro tanto argument for Kantians to work towards a UBI. A UBI, more so than conditional welfare schemes, enables agents to live up to their duty to be a useful member of the world. This should be conceptualized as an indirect duty to implement a UBI. Secondly, Kant’s ethics suggests a way to tackle the most pressing ethical objection against a UBI, the unfairness or surfer objection. The requirement that agents be useful for others is ethical and thus cannot be enforced externally. Yet, there is rational pressure on agents to do their part. Kant and UBI advocates can learn a great deal from each other.
-
In philosophy of science, constitutive explanations have attracted much attention since Craver’s influential book Explaining the Brain (2007). His Mutual Manipulability (MM) theory of constitution aimed to explicate constitution as a non-causal explanatory relation and to demarcate between constituents and non-constituents. But MM received decisive criticism. In response, Craver et al. (2021) have recently proposed a new theory, called Matched Interlevel Experiments (MIE), which is currently gaining traction in various fields. The authors claim that MIE retains “the spirit of MM without conceptual confusion.” Our paper argues that this claim is not borne out: neither does MIE meet MM’s objectives nor is it free of conceptual confusion. At the same time, we show that it is possible to meet MM’s objectives in a conceptually sound manner—by adopting the so-called No De-Coupling theory of constitution.
-
Perception is said to have assertoric force: It inclines the perceiver to believe its content. In contrast, perceptual imagination is commonly taken to be non-assertoric: Imagining winning a piano contest does not incline the imaginer to believe they actually won. However, abundant evidence from clinical and experimental psychology shows that imagination influences attitudes and behavior in ways similar to perceptual experiences. To account for these phenomena, I propose that perceptual imaginings have implicit assertoric force and put forth a theory—the Prima Facie View—as a unified explanation for the empirical findings reviewed. According to this view, mental images are treated as percepts in operations involving associative memory. Finally, I address alternative explanations that could account for the reviewed empirical evidence—such as a Spinozian model of belief formation or Gendler’s notion of alief—as well as potential objections to the Prima Facie View.
-
Detecting introspective errors about consciousness presents challenges that are widely supposed to be difficult, if not impossible, to overcome. This is a problem for consciousness science because many central questions turn on when and to what extent we should trust subjects’ introspective reports. This has led some authors to suggest that we should abandon introspection as a source of evidence when constructing a science of consciousness. Others have concluded that central questions in consciousness science cannot be answered via empirical investigation. I argue that on closer inspection, the challenges associated with detecting introspective errors can be overcome. I demonstrate how natural kind reasoning—the iterative application of inference to the best explanation to home in on and leverage regularities in nature—can allow us to detect introspective errors even in difficult cases such as judgments about mental imagery, and I conclude that worries about intractable methodological challenges in consciousness science are misguided.
-
Philosophers have struggled to explain the mismatch of emotions and their objects across time, as when we stop grieving or feeling angry despite the persistence of the underlying cause. I argue for a sceptical approach that says that these emotional changes often lack rational fit. The key observation is that our emotions must periodically reset for purely functional reasons that have nothing to do with fit. I compare this account to David Hume’s sceptical approach in matters of belief, and conclude that resistance to it rests on a confusion similar to one that he identifies.
-
1. Milton’s final work, Samson Agonistes, is built on an historical and aesthetic foundation many layers deep—as one might expect from this poet: ancient Greek tragedy; the Aristotelian theory of tragedy it inspired; the Biblical story of Samson, of which this is a transformational re-telling; and Samson’s place in the larger history of Israel. …
-
When I was trying to work out my intuitions about causal paradoxes of infinity, which eventually led to my formulating the thesis of causal finitism (CF)—that nothing can have an infinite causal history—I toyed with views that involved information. …
-
If this all sounds feasible, or even fun, then I’m afraid my description has been misleading. The description in question is by Sally Rooney, in an essay in the New York Review of Books, and it’s a description of playing snooker, aimed at an American audience familiar with the exponentially easier game of pool. …
-
Certain considerations from cosmology (Ellis 2006, 2014) and other areas of physics (Sklar 1990; Frisch 2004) pose challenges to the traditional distinction between laws and initial conditions, indicating the need for a more nuanced understanding of physical modality. A solution to these challenges is provided by presenting a conceptual framework according to which laws and fundamental lawlike assumptions within a theory’s nomic structure determine what is physically necessary and what is physically contingent from a physical theory’s point of view. Initial conditions are defined within this framework in terms of the possible configurations of a physical system allowed by the laws and other law-like assumptions of a theory. The proposed deflationary framework of physical modality offers an alternative way of understanding the distinction between laws and initial conditions and allows the question of the modal status of the initial conditions of the Universe to be asked in a meaningful way.
-
This paper explores the implications of the Extended Mind Hypothesis (ExM), as introduced by Andy Clark and David Chalmers in 1998. Focusing on cognitive integration and the Trust and Glue criteria, we have two objectives. First, we examine how ExM reshapes perspectives on human-AI interaction, particularly by challenging the Standard model of AI. We argue that AI, as an active non-organic agent, significantly influences cognitive processes beyond initial expectations. Second, we propose maintainability as a fourth criterion in the Trust and Glue criteria of ExM, in addition to trustworthiness, reliability, and accessibility.
-
We characterize Martin-Löf randomness and Schnorr randomness in terms of the merging of opinions, along the lines of the Blackwell-Dubins Theorem [BD62]. After setting up a general framework for defining notions of merging randomness, we focus on finite horizon events, that is, on weak merging in the sense of Kalai-Lehrer [KL94]. In contrast to Blackwell-Dubins and Kalai-Lehrer, we consider not only the total variation distance but also the Hellinger distance and the Kullback-Leibler divergence. Our main result is a characterization of Martin-Löf randomness and Schnorr randomness in terms of weak merging and the summable Kullback-Leibler divergence. The main proof idea is that the Kullback-Leibler divergence between µ and ν, at a given stage of the learning process, is exactly the incremental growth, at that stage, of the predictable process of the Doob decomposition of the ν-submartingale L(σ) = −ln(µ(σ)/ν(σ)). These characterizations of algorithmic randomness notions in terms of the Kullback-Leibler divergence can be viewed as global analogues of Vovk’s theorem [Vov87] on what transpires locally with individual Martin-Löf µ- and ν-random points and the Hellinger distance between µ and ν.
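The stated proof idea can be unpacked in a few lines (a sketch in my notation, assuming µ and ν are measures on infinite binary sequences and σ ranges over finite prefixes): by Jensen's inequality, L(σ) = −ln(µ(σ)/ν(σ)) is a ν-submartingale, so it has a Doob decomposition L = M + A with M a ν-martingale and A predictable and non-decreasing. The increment of A at σ is

\[
\mathbb{E}_\nu\!\left[L(\sigma b) - L(\sigma) \mid \sigma\right]
= \sum_{b \in \{0,1\}} \frac{\nu(\sigma b)}{\nu(\sigma)}\,
\ln \frac{\nu(\sigma b)/\nu(\sigma)}{\mu(\sigma b)/\mu(\sigma)}
= D_{\mathrm{KL}}\!\left(\nu(\cdot \mid \sigma) \,\middle\|\, \mu(\cdot \mid \sigma)\right),
\]

which is exactly the one-step Kullback-Leibler divergence between the two predictive distributions, as the abstract claims.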
-
How is it that individuals who deny experiencing visual imagery nonetheless perform normally on tasks which seem to require it? This puzzle of aphantasia has perplexed philosophers and scientists since the late nineteenth century. Contemporary responses include: (i) idiosyncratic reporting, (ii) faulty introspection, (iii) unconscious imagery, and (iv) complete lack of imagery combined with the use of alternative strategies. None offers a satisfying explanation of the full range of first-person, behavioural and physiological data. Here, I diagnose the puzzle of aphantasia as arising from the mistaken assumption that variation in imagery is well-captured by a single ‘vividness’ scale. Breaking with this assumption, I defend an alternative account which elegantly accommodates all the data. Crucial to this account is a fundamental distinction between visual-object and spatial imagery. Armed with this distinction, I argue that subjective reports and objective measures only testify to the absence of visual-object imagery, whereas imagery task performance is explained by preserved spatial imagery which goes unreported on standard ‘vividness’ questionnaires. More generally, I propose that aphantasia be thought of on analogy with agnosia, as a generic label for a range of imagery deficits with corresponding sparing.
-
(This is part of The Girls Who Went Away: Poems. Note that poetry is best read on a larger screen.) When I get pregnant, it’s my seventeenth Birthday. He’s Cape Verdean Portuguese And thirty-one years old. …
-
Titelbaum (2012) introduced a variant of the Sleeping Beauty problem in which a coin is tossed on both Monday and Tuesday, with the Tuesday toss not affecting Beauty’s condition. Titelbaum argues that double halfers are committed to the embarrassing position that Beauty’s credence that today’s coin toss lands heads is greater than 1/2. Pust ( ) agrees with the result, but argues that it is not a distinctive embarrassment for halfers. I argue that thirders need not be embarrassed. Double halfers, on the other hand, must hold that Beauty’s evidence is admissible for direct inference with respect to Monday’s coin toss, but not with respect to today’s coin toss. This is embarrassing because (1) a plausible argument exists for the opposite position, and (2) the position conflicts with the central motivation guiding double halfism.
-
Evaluating mega-labs is a challenging task that requires a combination of case-based and formal epistemic approaches. Data-driven studies suggest that projects pursued by smaller master-teams (fewer members, fewer sub-teams) are substantially more efficient than larger ones across sciences, including experimental particle physics. Smaller teams also seem to make better project choices than larger, centralized teams. Yet the epistemic requirement of small, decentralized, and diverse teams contradicts the often emphasized and allegedly inescapable logic of discovery that forces physicists pursuing the fundamental levels of the physical world to perform centralized experiments in mega-labs at high energies. We explain, however, that this epistemic requirement could be met, since the nature of theoretical and physical constraints in high energy physics and the technological obstacles stemming from them turn out to be surprisingly open-ended.
-
The AdS/CFT correspondence posits a holographic equivalence between a gravitational theory in Anti-de Sitter (AdS) spacetime and a conformal field theory (CFT) on its boundary, linked by gauge-invariant quantities like field strengths Fµν and fluxes Φ. This paper examines that link, drawing on my prior analysis of the Aharonov-Bohm (AB) effect, where such quantities exhibit nonlocality, discontinuity, and incompleteness. I demonstrate that gauge potentials Aµ in the Lorenz gauge—not their invariant derivatives—mediate the AB effect’s local, continuous dynamics, a reality extending to gravitational fields gµν as substantival entities. In AdS/CFT, the CFT’s reduction of bulk Aµ and gµν to gauge-invariant imprints fails to reflect this ontology, a flaw so fundamental that it excludes exact gauge/gravity duality—neither standard mappings nor reformulations suffice. A new mathematical proof formalizes this: the bulk’s diffeomorphism freedom cannot correspond to the boundary’s gauge freedoms, Abelian or non-Abelian, under this reality. This critique spans the gauge/gravity paradigm broadly, from AdS/CFT to holographic QCD, where symmetry invisibility obscures bulk physics. While duality’s successes in black hole thermodynamics and strongly coupled systems highlight its utility, I suggest these reflect approximations within specific regimes, not a full equivalence. I propose a shift toward a framework prioritizing the roles of Aµ and gµν, with gravitational AB effects in AdS as a testing ground. This work seeks to enrich holography’s dialogue, advancing a potential-centric view for quantum gravity.
-
In this brief article I respond to Seifert’s recent views on the periodic law and the periodic table in connection with the views of philosophers regarding laws of nature. I argue that the author makes some factual as well as conceptual errors which are in conflict with some generally held views regarding the periodic law and the periodic table.
-
The article sets out to clarify a number of confusions that exist in connection with the Born-Oppenheimer approximation (BOA). It is generally claimed that chemistry cannot be reduced to quantum mechanics because of the nature of this commonly used approximation in quantum chemistry, which is popularly believed to require a ‘clamping’ of the nuclei. It is also claimed that the notion of molecular structure, which is so central to chemistry, cannot be recovered from the quantum mechanical description of molecules and that it must be imposed by hand through the BOA. Such an alleged failure of reduction is then taken to open the door to concepts such as emergence and downward causation.
-
It has been argued that, in scientific observations, the theory of the observed source should not be involved in the observation process to avoid circular reasoning and ensure reliable inferences. However, the issue of underdetermination of the source has been largely overlooked. I argue that concerns about circularity in inferring the source stem from the hypothetico-deductive (H-D) method. The epistemic threat, if any, arises not from the theory-laden nature of observation but from the underdetermination of the source by the data, since the data could be explained by proposing incompatible sources for it. Overcoming this underdetermination is key to reliably inferring the source. I propose a bidirectional version of inference to the only explanation as a methodological framework that addresses this challenge while circumventing concerns about theory-ladenness. Nevertheless, fully justifying the viability of the background theoretical framework and its accurate description of the source requires a broader conception of evidence. To this end, I argue that integrating meta-empirical assessment into inference to the only explanation offers a promising strategy, extending the concept of evidence in a justifiable manner.
-
Large Language Models (LLMs) increasingly produce outputs that resemble introspection, including self-reference, epistemic modulation, and claims about internal states. This study investigates whether such behaviors display consistent patterns across repeated prompts or reflect surface-level generative artifacts. We evaluated five open-weight, stateless LLMs using a structured battery of 21 introspective prompts, each repeated ten times, yielding 1,050 completions. These outputs were analyzed across three behavioral dimensions: surface-level similarity (via token overlap), semantic coherence (via sentence embeddings), and inferential consistency (via natural language inference). Although some models demonstrate localized thematic stability—especially in identity- and consciousness-related prompts—none sustain diachronic coherence.
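As a concrete illustration of the three dimensions, here is a minimal sketch of how such metrics could be computed over the ten completions of a single prompt (the specific embedding model, NLI model, and aggregation choices are my assumptions, not necessarily the study's pipeline):

# Sketch of the three consistency metrics (illustrative assumptions only).
from itertools import combinations

from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

embedder = SentenceTransformer("all-MiniLM-L6-v2")                  # assumed model
nli = pipeline("text-classification", model="roberta-large-mnli")  # assumed model

def token_jaccard(a: str, b: str) -> float:
    """Surface-level similarity: Jaccard overlap of lowercased tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency_scores(completions: list[str]) -> dict[str, float]:
    """Score one prompt's repeated completions on all three dimensions."""
    idx_pairs = list(combinations(range(len(completions)), 2))
    # 1. Surface similarity: mean pairwise token overlap.
    surface = sum(token_jaccard(completions[i], completions[j])
                  for i, j in idx_pairs) / len(idx_pairs)
    # 2. Semantic coherence: mean pairwise cosine similarity of embeddings.
    emb = embedder.encode(completions, convert_to_tensor=True)
    sims = util.cos_sim(emb, emb)
    semantic = sum(sims[i][j].item() for i, j in idx_pairs) / len(idx_pairs)
    # 3. Inferential consistency: share of pairs the NLI model does not
    #    label as a contradiction.
    verdicts = [nli({"text": completions[i], "text_pair": completions[j]})[0]["label"]
                for i, j in idx_pairs]
    inferential = sum(v != "CONTRADICTION" for v in verdicts) / len(verdicts)
    return {"surface": surface, "semantic": semantic, "inferential": inferential}

On this kind of scoring, "localized thematic stability" would show up as high semantic scores for particular prompts, while a failure of diachronic coherence would show up as low inferential scores across repetitions.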
-
The quantum measurement problem is one of the most profound challenges in modern physics, questioning how and why the wavefunction collapses during measurement to produce a single observable outcome. In this paper, we propose a novel solution through a logical framework called Aethic reasoning, which reinterprets the ontology of time and information in quantum mechanics. Central to this approach is the Aethic principle of extrusion, which models wavefunction collapse as progression along a Markov chain of block universes, effectively decoupling the Einsteinian flow of time from quantum collapse events. This principle introduces an additional degree of freedom to time, enabling the first Aethic postulate: that informational reality is reference-dependent, akin to the relativity of simultaneity in special relativity. This reference point, or Aethus, is rigorously defined within a mathematical structure. Building on this foundation, the second postulate resolves the distinction between quantum superpositions and logical contradictions by encoding superpositions in a “backend” Aethic framework before rendering observable states. The third postulate further distinguishes quantum coherence from decoherence using a two-generational model of state inheritance, potentially advancing beyond simpler interpretations of information leakage. Together, these postulates yield a direct theoretical derivation of the collapse postulate, fully consistent with empirical results such as the outcome of the double-slit experiment. By addressing foundational aspects of quantum mechanics through a logically robust and philosophically grounded lens, this framework sheds new light on the measurement problem and offers a solid foundation for future exploration.