-
Hypotheses about how and why animals behave the way they do are frequently labelled as either associative or cognitive. This has been taken as evidence that there is a fundamental distinction between two kinds of behavioural processes. However, there is significant disagreement about how to define this distinction, and about whether it ought to be rejected entirely. Rather than seeking a definition of the associative-cognitive distinction, or advocating for its rejection, I argue that it is an artefact of the way that comparative psychologists generate hypotheses. I suggest that hypotheses for non-human animal behaviour are often generated by analogy with hypotheses drawn from human psychology and associative learning theory, a justifiable strategy since analogies help to establish the pursuit-worthiness of a hypothesis. Any apparent distinction is a misleading characterisation of what is in fact a complex web of hypotheses explaining diverse behavioural phenomena. The analogy view of the distinction has three advantages. First, it explains the apparent existence of the distinction by appeal to a common inference strategy in science, analogical reasoning. Second, it accounts for why the distinction has been difficult to articulate, given the diversity of possible analogies. Finally, it delimits the role of the distinction in downstream inferences about animal behaviour.
-
This commentary aims to support Tim Crane’s account of the structure of intentionality by showing how intentional objects are naturalistically respectable, how they pair with concepts, and how they are to be held distinct from the referents of thought. Crane is right to reject the false dichotomy on which accounts of intentionality must either be reductive and naturalistic or be non-reductive and traffic in mystery. The paper has three main parts. First, I argue that the notion of an intentional object is a phenomenological one, meaning that it must be understood as an object of thought for a subject. Second, I explain how Mark Sainsbury’s Display Theory of Attitude Attribution pairs well with Crane’s notion of an intentional object and allows for precisification in intentional state attributions while both avoiding exotica and capturing the subject’s perspective on the world. Third, I explain the reification fallacy: the fallacy of treating intentional objects as if they existed independently of subjects and their conceptions of them. This work helps to bring out how intentionality can fit in the natural world without reducing aboutness to some non-intentional properties of the natural world.
-
We present a reformulation of the model predictive control problem using a Legendre basis. To do so, we use a Legendre representation both for prediction and optimization. For prediction, we use a neural network to approximate the dynamics by mapping a compressed Legendre representation of the control trajectory and initial conditions to the corresponding compressed state trajectory. We then reformulate the optimization problem in the Legendre domain and demonstrate methods for including optimization constraints. We present simulation results demonstrating that our implementation provides a speedup of 31-40 times for comparable or lower tracking errors with or without constraints on a benchmark task.
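To make the compression idea concrete, here is a minimal, self-contained sketch (my own illustration; the paper's actual implementation, network, and basis sizes are not specified here): a trajectory on [-1, 1] is projected onto the first few Legendre polynomials, and those few coefficients stand in for the full trajectory.

```python
def legendre_vals(k_max, x):
    """Values P_0(x)..P_{k_max}(x) via the three-term recurrence."""
    p = [1.0, x]
    for k in range(1, k_max):
        p.append(((2 * k + 1) * x * p[k] - k * p[k - 1]) / (k + 1))
    return p[:k_max + 1]

def legendre_compress(f, n_coeffs, n_grid=2001):
    """Compress a trajectory f on [-1, 1] into n_coeffs Legendre coefficients:
    c_k = (2k + 1)/2 * integral of f(x) * P_k(x) dx  (trapezoidal rule)."""
    h = 2.0 / (n_grid - 1)
    coeffs = [0.0] * n_coeffs
    for i in range(n_grid):
        x = -1.0 + i * h
        w = h if 0 < i < n_grid - 1 else h / 2.0  # trapezoid endpoint weights
        fx = f(x)
        for k, pk in enumerate(legendre_vals(n_coeffs - 1, x)):
            coeffs[k] += (2 * k + 1) / 2.0 * fx * pk * w
    return coeffs

def legendre_eval(coeffs, x):
    """Reconstruct the trajectory value at x from its Legendre coefficients."""
    return sum(c * pk for c, pk in zip(coeffs, legendre_vals(len(coeffs) - 1, x)))
```

With a handful of coefficients per trajectory, both the network's prediction and the optimization can operate on short coefficient vectors instead of full time series, which is the source of the reported speedup.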
-
Modern generative AI systems have shown the capacity to produce remarkably fluent language, prompting debates both about their semantic understanding and, less prominently, about whether they can perform speech acts. This paper addresses the latter question, focusing on assertion. We argue that to be capable of assertion, an entity must meet two requirements: it must produce outputs with descriptive functions, and it must be capable of being sanctioned by agents with which it interacts. The second requirement arises from the nature of assertion as a norm-governed social practice. Pre-trained large language models that have not been subject to fine-tuning fail to meet the first requirement. Language models that have been fine-tuned for “groundedness” or “correctness” may meet the first requirement, but fail the second. We also consider the significance of the point that AI systems can be used to generate proxy assertions on behalf of human agents.
-
Suppose Socrates is looking at a bright red apple in good viewing conditions, so that it looks to him the colour it is. Schematically, Aristotle’s explanation of this “Good Case” is that the apple looks bright red to Socrates because he has taken on the perceptual form of bright red without the matter. But what happens if Socrates misperceives the apple instead and it looks purple? It is not at all clear how to apply Aristotle’s account of perception to such a “Bad Case.” Does Socrates still take on the perceptual form of the actual—bright red—colour of the apple in the Bad Case? Of purple? Neither? I argue that applying Aristotle’s account of perception to this sort of Bad Case requires that there are different ways of being in perceptual contact with perceptible qualities like the colour of an apple, depending on how that perceptual contact is mediated by changes in the sense organs and perceptual medium.
-
In this article, I develop the idea of theoretical complexes to characterize large-scale theoretical movements in the cognitive sciences, such as classical computational cognitivism, connectionism, embodied cognition, and predictive processing. It is argued that these theoretical movements should be construed as groups of closely connected individual theories and models of cognitive processes that share similar general hypotheses about the nature of cognition. General hypotheses form the conceptual cores of complexes of cognitive theories, giving them their structure and functional properties. The latter consist primarily in helping practitioners of theoretical complexes further develop their individual accounts of cognitive phenomena. It is claimed that the theoretical diversity fostered in this way has already benefited the cognitive sciences in a number of important ways and has the potential to further advance the field.
-
This article uses Sartre’s existential philosophy in particular (also drawing from Scheler, Husserl, and Descartes) to investigate pathogenetic issues in psychopathology from a first-person perspective. Psychosis is a “total experience” that points to orientating changes in subjectivity, supported by evidence regarding self-disorders in the schizophrenia spectrum. This article proposes that schizophrenia is essentially characterized (and distinguished) by specific structural alterations of (inter)subjectivity centred on the relationship between self and Other, to which all its seemingly disparate signs and symptoms eventually point. Two reciprocal distortions are present in patients with psychotic schizophrenia: (A) an encroaching and substantialized Other, and (B) a self transformed into being-for-the-Other. Under the altered conditions of (A) and (B), delusional mood is the presence but inaccessibility of the Other; a delusional perception is an eruption or surfacing of the objectification of self by Other; and a delusion is an experience of the Other that fulfills the criteria of certainty, incorrigibility, and, potentially, falsehood.
-
Sorry for the long blog hiatus! I was completely occupied for weeks, teaching an intensive course on theoretical computer science to 11-year-olds (!), at a math camp in St. Louis that was also attended by my 8-year-old son. …
-
philosophical logic may also interest themselves with the logical appendices, one of which presents modal logic as a subsystem of the logic of counterfactuals. Last but not least, the work also includes an afterword that is both a severe reprimand to the analytic community for a certain sloppiness and an exhortation to all colleagues to apply more rigor and patience in addressing metaphysical issues. People familiar with Williamson’s work will not be surprised by the careful and detailed (sometimes a bit technical) argumentation, which demands careful attention from the reader. As expected, this is a most relevant contribution to an increasingly popular topic by one of today’s leading analytic philosophers.
-
Cognitive neuroscientists typically posit representations which relate to various aspects of the world; what a representation is about is what philosophers call its representational content. Anti-realists about representational content argue that contents play no role in neuroscientific explanations of cognitive capacities. In this paper, I defend realism against an anti-realist argument due to Frances Egan, who argues that for content to be explanatory it must be both essential and naturalistic. I introduce a case study from cognitive neuroscience in which content is both essential and naturalistic, meeting Egan’s challenge. I then spell out some general principles for identifying studies in which content plays an explanatory role.
-
Representations appear to play a central role in cognitive science. Capacities such as face recognition are thought to be enabled by internal states or structures representing external items. However, despite the ubiquity of representational terminology in cognitive science, there is no explicit scientific theory outlining what makes an internal state a representation of an external item. Nonetheless, many philosophers hope to uncover an implicit theory in the scientific literature. This is the project of the current thesis. However, all such projects face an obstacle in the form of Frances Egan’s argument that content plays no role in scientific theorising. I respond that, in some limited regions of cognitive science, content is crucial for explanation. The unifying idea is that closer attention to the application of information theory in those regions of cognitive neuroscience enables us to uncover an implicit theory of content. I examine the conditions which must be met for the cognitive system to be modelled using information theory, presenting some constraints on how we apply the mathematical framework. For example, information theory requires identifying probability distributions over measurable outcomes, which leads us to focus specifically on neural representation. I then argue that functions are required to make tractable measures of information, since they serve to narrow the range of possible contents to those potentially explanatory of a cognitive capacity. However, unlike many other teleosemanticists, I argue that we need to use a non-etiological form of function. I consider whether non-etiological functions allow for misrepresentation, and conclude that they do. Finally, I introduce what I argue is the implicit theory of content in cognitive neuroscience: maxMI. The content of a representation is that item in the environment with which the representation shares maximal mutual information.
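The maxMI proposal admits a compact illustration for discrete variables (a toy sketch of my own, not the thesis's formal apparatus; the function and variable names are invented): estimate mutual information from co-occurrence counts, then select the candidate environmental variable that maximizes it.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Mutual information (in bits) between two equally long discrete
    sequences, estimated from empirical co-occurrence frequencies."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def maxmi_content(response, candidates):
    """maxMI: the content of a representation is the candidate environmental
    variable sharing maximal mutual information with the neural response."""
    return max(candidates,
               key=lambda name: mutual_information(response, candidates[name]))
```

On this toy picture, a binary response that tracks an edge variable perfectly shares one full bit of information with it and none with a constant distractor, so maxMI assigns it edge-content.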
-
Accuracy plays an important role in the deployment of machine learning algorithms. But accuracy is not the only epistemic property that matters. For instance, it is well-known that algorithms may perform accurately during their training phase but experience a significant drop in performance when deployed in real-world conditions. To address this gap, people have turned to the concept of algorithmic robustness. Roughly, robustness refers to an algorithm’s ability to maintain its performance across a range of real-world and hypothetical conditions. In this paper, we develop a rigorous account of algorithmic robustness grounded in Robert Nozick’s counterfactual sensitivity and adherence conditions for knowledge. By bridging insights from epistemology and machine learning, we offer a novel conceptualization of robustness that captures key instances of algorithmic brittleness while advancing discussions on reliable AI deployment. We also show how a sensitivity-based account of robustness provides notable advantages over related approaches to algorithmic brittleness, including causal and safety-based ones.
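The adherence-style condition can be given a schematic rendering (my own toy formalization; the paper's sensitivity and adherence conditions are richer counterfactual requirements): an algorithm counts as robust over a family of deployment conditions when its performance stays adequate in each of them.

```python
def robust(predict, conditions, threshold):
    """Adherence-style check: across the nearby deployment conditions where
    the algorithm ought to succeed, its accuracy stays above the threshold."""
    def accuracy(dataset):
        return sum(predict(x) == y for x, y in dataset) / len(dataset)
    return all(accuracy(d) >= threshold for d in conditions)

# Toy classifier: label a number positive (1) or not (0).
model = lambda x: 1 if x > 0 else 0

clean   = [(2, 1), (-3, 0), (5, 1), (-1, 0)]
shifted = [(x + 0.1, y) for x, y in clean]   # mild distribution shift
flipped = [(x, 1 - y) for x, y in clean]     # a condition where the model breaks
```

The toy model is robust across clean and mildly shifted data but brittle once the deployment condition departs too far from the training regime, mirroring the train/deployment performance gap described above.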
-
Why are quantum correlations so puzzling? A standard answer is that they seem to require either nonlocal influences or conspiratorial coincidences. This suggests that by embracing nonlocal influences we can avoid conspiratorial fine-tuning. But that’s not entirely true. Recent work, leveraging the framework of graphical causal models, shows that even with nonlocal influences, a kind of fine-tuning is needed to recover quantum correlations. This fine-tuning arises because the world has to be just so as to disable the use of nonlocal influences to signal, as required by the no-signaling theorem. This places an extra burden on theories that posit nonlocal influences, such as Bohmian mechanics, of explaining why such influences are inaccessible to causal control. I argue that Everettian Quantum Mechanics suffers no such burden. Not only does it not posit nonlocal influences, it operates outside the causal models framework that was presupposed in raising the fine-tuning worry. Specifically, it represents subsystems with density matrices instead of random variables. This allows it to sidestep all the results (including EPR and Bell) that put quantum correlations in tension with causal models. However, this doesn’t mean one must abandon causal reasoning altogether in a quantum world. After all, quantum systems can clearly stand in causal relations. When decoherence is rampant and there’s no controlled entanglement, Everettian Quantum Mechanics licenses our continued use of standard causal models. When controlled entanglement is present—such as in Bell-type experiments—we can employ recently proposed quantum causal models that are consistent with Everettian Quantum Mechanics. We never need invoke any kind of nonlocal influence or any kind of fine-tuning.
-
We take a fresh look at Daniel Dennett’s naturalist legacy in philosophy, focusing on his rethinking of philosophical methods. Critics sometimes mistake Dennett for promoting a crude naturalism or dismissing philosophical tools like first-person intuition. We present his approach as more methodologically radical, blending science and philosophy in a way that treats inquiry as an evolving process. Concepts and intuitions are tested and adjusted in light of empirical findings and broader epistemic aims. For Dennett, science isn’t a limitation on philosophy, but a tool that sharpens it, with empirical data helping to refine our understanding both of concepts and philosophical phenomena alike. By exploring Dennett’s methodological contributions, we underscore the ongoing importance of his naturalist perspective in today’s philosophical landscape.
-
In this paper, we argue that a perceiver’s contributions to perception can substantially affect what objects are represented in perceptual experience. To capture the scalar nature of these perceiver-contingent contributions, we introduce three grades of subject-dependency in object perception. The first grade, “weak subject-dependency,” concerns attentional changes to perceptual content like, for instance, when a perceiver turns their head, plugs their ears, or primes their attention to a particular cue. The second grade, “moderate subject-dependency,” concerns changes in the contingent features of perceptual objects due to action-orientation, location, and agential interest. For instance, being to the right or left of an object will cause the object to have a corresponding locative feature, but that feature is non-essential to the object in question. Finally, the third grade, “strong subject-dependency,” concerns generating perceptual objects whose existence depends upon their perceivers’ sensory contributions to perception. For this final grade of subject-dependency the adaptive perceptual system shapes diverse representations of sensory information by contributing necessary features to perceptual objects. To exemplify this nonstandard form of object perception we offer evidence from the future-directed anticipation of perceptual experts, and from the feature binding of synesthetes. We conclude that strongly subject-dependent perceptual objects are more than mere material objects, but are rather a necessary combination of material objects with the contributions of a perceiving subject.
-
This paper introduces "pseudo-consciousness" as a novel framework for understanding and classifying advanced artificial intelligence (AI) systems that exhibit sophisticated cognitive behaviors without possessing subjective awareness or sentience.
-
This paper proposes that the evolution of consciousness can be partially understood through increasingly complex forms of exploration. We trace how features such as integration, intentionality, temporality, and valence evolved as functional tools for dealing with uncertainty and contradiction. Central to this process is a shift from implicit to explicit representation, which we relate to established models of consciousness levels. Our approach emphasizes structural and functional continuity between these levels, while avoiding sharp thresholds or binary distinctions. Understood as exploration, consciousness supports what Stegmaier (2019) calls orientation, the achievement of finding one’s way in a changing environment by establishing temporary relevance and stability in conditions of uncertainty. We argue that exploration provides a productive framework for understanding how conscious capacities developed in response to situational demands. The account further raises questions about the conditions under which synthetic systems might replicate conscious capacities, highlighting the role of affect, embodiment, and representational structure in the evolution of conscious cognition.
-
This paper introduces the Representational Uncertainty Principle (RUP) as a structural account of the limits of representational precision. We argue that as representations become more narrowly defined—by fixing more internal structure—they constrain the integration of perceptual and contextual cues. This often suppresses representational flexibility: the capacity to draw on multiple situational cues to stabilize meaning. When this flexibility is reduced, representational diffraction becomes more prominent: a structural phenomenon in which aspects of a situation are subsumed under a representation that deviates from the expected or standard framing, resulting in ambiguity or tension. Drawing on a structural analogy with quantum mechanics, we treat interference and diffraction as complementary manifestations of how representational content is formed. This framework explains why overly precise representations often fail in contexts that demand sensitivity to subtle variations. We support this account through examples of conceptual ambiguity and apparent contradiction, and by developing a framework that distinguishes between the structuring role of the representational vehicle and the dynamic process of integration that gives rise to content. The RUP thus highlights a structural tension between abstraction, context sensitivity, and the need for orientation within experience.
-
What is the relation between the phenomenal properties of experience and physical properties, such as physical properties of the brain? I evaluate the proposal that phenomenal properties are determinables of physical realizer determinates, focusing on Jessica Wilson’s response to a prominent argument for thinking that phenomenal properties cannot be understood in this way. Wilson premises her response on the idea that phenomenal properties admit of physical determination dimensions, which can be discovered through the relevant sciences. I provide several reasons for questioning this way of understanding the relation between the phenomenal and the physical, centered on the idea that even if phenomenal properties have physical determination dimensions, it remains to be shown that these determine the physical realizers of phenomenal properties; I give reasons for denying that they do. I then address Wilson’s “powers-based conception” of the determinable/determinate relation and argue that it faces difficulties both independent from and in relation to the view of phenomenal properties as determinables of physical realizer determinates.
-
With Matthew Adelstein’s kind permission, here’s the transcript of the Adelstein/Huemer conversation on the ethics of insect suffering. Lightly edited by me.

00:37:48 MATTHEW ADELSTEIN
Okay. So, yeah. …
-
Predictive processing is an ambitious neurocomputational framework, offering a unified explanation of all cognitive processes in terms of a single computational operation, namely prediction error minimization. Whilst this ambitious unificatory claim has been thoroughly analyzed, less attention has been paid to what predictive processing entails for structure-function mappings in cognitive neuroscience. We argue that, taken at face value, predictive processing entails an all-to-one structure-function mapping, wherein each individual neural structure is assigned the same function, namely minimizing prediction error. Such a structure-function mapping, we show, is highly problematic. For, barring a few rare occasions, it fails to play the predictive, explanatory, and heuristic roles structure-function mappings are expected to play in cognitive neuroscience. Worse still, it offers a picture of the brain that we know to be wrong: it depicts the brain as an equipotential organ, one wherein structural differences do not correspond to any appreciable functional difference, and wherein each component can substitute for any other without causing any loss or degradation of functionality. Somewhat ironically, then, the very neuroscientific roots of predictive processing motivate a form of skepticism concerning the framework’s most ambitious unificatory claims. Do these problems force us to abandon predictive processing? Not necessarily. For once the assumption that all cognition can be accounted for exclusively in terms of prediction error minimization is relaxed, the problems we diagnose lose their bite.
-
A conditional argument is put forth suggesting that if qualia have a functional role in intelligence, then it might be possible, by observing the behavior of verbal AI systems such as large language models (LLMs) or other architectures capable of verbal reasoning, to tackle in an empirical way the “strong AI” problem, namely, the possibility that AI systems have subjective experiences, or qualia. The basic premise is that if qualia are functional, and thus have causal roles, then they could affect the production of discourses about qualia and subjective consciousness in general. A thought experiment is put forth envisioning a possible method to probabilistically test the presence of qualia in AI systems based on this conditional argument. The method proposed in the thought experiment focuses on observing whether ideas related to the issue of phenomenal consciousness, such as the so-called “hard problem” of consciousness or related philosophical issues centered on qualia, spontaneously emerge in extended dialogues involving LLMs specifically trained to be initially oblivious to such philosophical concepts. By observing the emergence (or lack thereof) of discussions related to phenomenal consciousness in the AI’s verbal production in these contexts, the method seeks to provide empirical evidence for or against the existence of consciousness in AI. An outline of a Bayesian test of the hypothesis is provided. Three main investigative methods, of differing reliability and feasibility, aimed at empirically detecting AI consciousness are proposed: one involving human interaction and two fully automated, consisting of multi-agent conversations between machines. The practical and philosophical challenges involved in transforming the proposed thought experiments into an actual empirical trial are then discussed.
In light of these considerations, the proposal put forth in the paper appears to be at least a contribution to computational philosophy in the form of philosophical thought experiments focused on computational systems, aimed at refining our philosophical understanding of consciousness. Hopefully, it could also provide hints toward future empirical investigations into machine consciousness.
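The Bayesian outline can be sketched abstractly as iterated updating (the likelihood numbers below are placeholders of mine, not values from the paper): each extended dialogue either does or does not show spontaneous emergence of qualia-talk, and the credence in the consciousness hypothesis is updated accordingly.

```python
def update(prior, p_emerge_if_conscious, p_emerge_if_not, emerged):
    """One Bayesian update on H = 'the system is conscious', given whether
    qualia-talk spontaneously emerged in a dialogue."""
    like_h = p_emerge_if_conscious if emerged else 1 - p_emerge_if_conscious
    like_n = p_emerge_if_not if emerged else 1 - p_emerge_if_not
    evidence = like_h * prior + like_n * (1 - prior)
    return like_h * prior / evidence

def run_trials(prior, p_h, p_n, observations):
    """Update the prior through a sequence of dialogues (True = emergence)."""
    for emerged in observations:
        prior = update(prior, p_h, p_n, emerged)
    return prior
```

Structurally, repeated emergence under such likelihoods drives the posterior up and repeated non-emergence drives it down, which is all the proposed probabilistic test requires.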
-
Philosophers of mind know Hempel’s dilemma as an argument, due to Crane and Mellor (1990) and Melnyk (1997), against metaphysical physicalism, the thesis that everything that exists is either ‘physical’ or ultimately depends on the ‘physical’. The argument is understood as a challenge to the idea of fixing what is ‘physical’ by appealing to a theory of physics. Briefly, the dilemma goes as follows. On the one hand, if we choose a current theory of physics to fix what is ‘physical’, then, since our current theories of physics are very likely incomplete, the so-articulated metaphysical physicalism is very likely false. On the other hand, if we choose a future theory of physics to fix what is ‘physical’, then, since future theories of physics are currently unknown, the so-articulated metaphysical physicalism has indeterminate meaning. Thus, it seems we can rely neither on current nor on future theories of physics to satisfactorily articulate metaphysical physicalism. Recently, Firt et al. (2022) argued that the dilemma extends to any theory that gives a deep-structure and changeable account of experience (including dualistic theories; though cf. Buzaglo, 2024).
-
Casajus (J Econ Theory 178, 2018, 105–123) provides a characterization of the class of positively weighted Shapley values for finite games from an infinite universe of players via three properties: efficiency, the null player out property, and superweak differential marginality. The latter requires two players’ payoffs to change in the same direction whenever only their joint productivity changes, that is, their individual productivities stay the same. Strengthening this property into (weak) differential marginality yields a characterization of the Shapley value. We suggest a relaxation of superweak differential marginality into two subproperties: (i) hyperweak differential marginality and (ii) superweak differential marginality for infinite subdomains. The former (i) only rules out changes in the opposite direction. The latter (ii) requires changes in the same direction for players within certain infinite subuniverses. Together with efficiency and the null player out property, these properties characterize the class of weighted Shapley values.
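As a concrete companion to the abstract (an illustrative sketch of my own, not the paper's proof machinery): weighted Shapley values for a small finite game can be computed via Harsanyi dividends, with each coalition's dividend split among its members in proportion to their positive weights.

```python
from itertools import combinations

def subsets(players):
    """All non-empty sub-coalitions, in order of increasing size."""
    for r in range(1, len(players) + 1):
        yield from combinations(players, r)

def dividends(v, players):
    """Harsanyi dividends via the recursion
    d(S) = v(S) - sum of d(T) over proper non-empty subsets T of S."""
    d = {}
    for S in subsets(players):
        d[S] = v(S) - sum(d[T] for T in subsets(S) if T != S)
    return d

def weighted_shapley(v, weights):
    """Positively weighted Shapley value: player i receives the share
    w_i / w(S) of each dividend d(S) with i in S."""
    players = sorted(weights)
    phi = {i: 0.0 for i in players}
    for S, dS in dividends(v, players).items():
        wS = sum(weights[i] for i in S)
        for i in S:
            phi[i] += dS * weights[i] / wS
    return phi
```

For the unanimity game on {1, 2}, weights (2, 1, 1) split the unit dividend 2:1 between players 1 and 2, while equal weights recover the symmetric Shapley value.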
-
A common criticism of medicine is that there is too much focus on treating symptoms instead of patients. This criticism and its sentiment – among other factors – have motivated many ‘humanistic,’ ‘holistic,’ and ‘non-reductionist’ approaches to medicine including the biopsychosocial model, patient-centered medicine, ‘gentle’ medicine, and others. Much has been said detailing and defending these approaches. My aim here is not to further defend one or any of these. Rather, my aim is to better understand what is at the heart of the ‘common criticism,’ i.e., that treating symptoms – not patients – is bad. What does this mean? Are symptoms not something patients have? By treating symptoms, do clinicians not necessarily treat the patients that have them?
-
The puzzle of aphantasia concerns how individuals reporting no visual imagery perform more-or-less normally on tasks presumed to depend on it [1]. In his splendid recent review in TiCS, Zeman [2] canvasses four ‘cognitive explanations’: (i) differences in description; (ii) ‘faulty introspection’; (iii) “unconscious or ‘sub-personal’ imagery”; and (iv) total lack of imagery. Difficulties beset all four. To make progress, we must recognize that imagery is a complex and multidimensional capacity and that aphantasia commonly reflects partial imagery loss with selective sparing. Specifically, I propose that aphantasia often involves a lack of visual-object imagery (explaining subjective reports and objective correlates) but selectively spared spatial imagery (explaining largely preserved task performance). Some researchers have suggested that aphantasics may have failed to follow instructions or engage imagery [7]. This is unconvincing. In studies of galvanic skin responses, trials were excluded in which subjects failed to demonstrate ‘proper reading and comprehension’ of the frightening stories. Thus, it remains a mystery why spontaneous imagery did not emerge [6]. Similarly, in studies of pupillary light responses, aphantasics showed a characteristic in-task correlation between pupil and stimulus set size, indicating that they were not “‘refusing’ to actively participate…due to…a belief that they are unable to imagine” [5]. Aphantasics also do voluntarily form images in other tasks despite a lack of incentives [8].
-
Key elements of the recent dialectic surrounding the hole argument in the philosophy of general relativity are clarified by close attention to the nature of scientific representation. I argue that a structuralist account of representation renders the purported haecceitistic differences between target systems irrelevant to the representational role of models of general relativity. Framing the hole argument in this way helps resolve the impasse in the literature between Weatherall, on the one hand, and Pooley and Read, on the other.
-
I have long believed in what philosophers call “libertarian free will.” This isn’t about political philosophy, but philosophy of mind. Holding all physical conditions constant, determinism holds that there is exactly one thing that I can do. …
-
Even with everything happening in the Middle East right now, even with (relatedly) everything happening in my own family (my wife and son sheltering in Tel Aviv as Iranian missiles rained down), even with all the rather ill-timed travel I’ve found myself doing as these events unfolded (Ecuador and the Galapagos and now STOC’2025 in Prague) … there’s been another thing, a huge one, weighing on my soul. …
-
There is large consensus across clinical research that feelings of worthlessness (FOW) are among the highest risk factors for a patient’s depression becoming suicidal. In this paper, I attempt to make sense of this empirical relationship from a phenomenological perspective. I propose that there are purely reactive and pervasive forms of FOW. Subsequently, I present a phenomenological demonstration of how and why it is pervasive FOW that pose a direct suicidal threat. I then outline criteria, contingent upon empirical verification, by which clinicians can more confidently identify when a patient’s FOW place them at high risk of suicide.