-
It has been argued that inductive underdetermination entails that machine learning algorithms must be value-laden. This paper offers a more precise account of what it would mean for a “machine learning algorithm” to be “value-laden,” and, building on this, argues that a general argument from underdetermination does not warrant this conclusion.
-
In this transparently organized and argued book, Bird defends two main theses: that the aim of science is production of (scientific) knowledge, and that even moderate empiricism is an incorrect account of the epistemology of science. The two theses are directly logically related by his account of evidence. Evidence, he maintains, is whatever can be used as a sound inferential basis for knowledge; and, at least in contemporary science that relies on sophisticated instruments, automated analysis, and distributed processing across specialist authors, this basis seldom if ever includes reports of anyone’s sense perceptions.
-
We humans think a lot about agency – about what people do, about what they can do, and what they ought to do. I want to highlight four puzzles raised by the way we tend to approach these questions. None of the puzzles is new, but they are usually discussed in isolation; I will argue that they have a common source and a common solution. The first puzzle, to be discussed in sections 2 and 3, arises from two features of the “perspectival ‘ought’ ”. On the one hand, the perspectival ‘ought’ appears to supervene on the agent’s perspective or evidence. On the other hand, this sense of ‘ought’ seems to imply ‘can’. But couldn’t an agent lack information about what they can do?
-
The standard theory of choice in economics involves modelling human agents as if they had precise attitudes when in fact they are often fuzzy. For the normative purposes of welfare economics, it might be thought that the imposition of a precise framework is nevertheless well justified: If we think the standard theory is normatively correct, and therefore that agents ought to be in this sense precise, then doesn’t it follow that their true welfare can be measured precisely? I will argue that this thought, central to the preference purification project in behavioural welfare economics, commits a fallacy. The standard theory requires agents to adopt precise preferences; but neither the theory nor a fuzzy agent’s initial attitudes need determine a particular way in which she ought to precisify them. So before a fuzzy agent has actually precisified her preferences, her welfare may remain indeterminate. I go on to consider the implications of this fallacy for welfare economics.
-
The idea that people make mistakes in how they pursue their own best interests, and that we can identify and correct for these mistakes, has been central to much recent work in behavioural economics, and to the ‘nudge’ approach to public policy grounded on it. The focus in this literature has been on individual choices that are mistaken. Agreeing with, and building on, the criticism that this literature has been too quick to identify individual choices as mistaken, I argue that it has also overlooked a kind of mistake that is potentially more significant: irreducibly diachronic mistakes, which occur when a series of choices over time does not serve our interests well, even though no individual choice can be identified as a mistake. I argue for the claim that people make such mistakes, and reflect on its significance for welfare economics.
-
In her Choosing Well, Chrisoula Andreou puts forth an account of instrumental rationality that is revisionary in two respects. First, it changes the goalpost or standard of instrumental rationality to include “categorial” appraisal responses, alongside preferences, which are relational. Second, her account is explicitly diachronic, applying to series of choices as well as isolated ones. Andreou takes both revisions to be necessary for dealing with problematic choice scenarios agents with disorderly preferences might find themselves in. Focusing on problem cases involving cyclical preferences, I will first argue that her first revision is undermotivated once we accept the second. If we are willing to grant that there are diachronic rationality constraints, the preference-based picture can get us further than Andreou acknowledges. I will then turn to present additional grounds for rejecting the preference-based picture. However, these grounds also seem to undermine Andreou’s own appeal to categorial appraisal responses.
-
Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific risks of manipulation remain inadequately investigated. This article outlines key conceptual, empirical, and design questions that must be addressed in order to understand and curb these risks. In doing so, it underscores the need for an appropriate conceptualisation of manipulation to ensure the responsible development of generative AI technologies.
-
Franke (Philosophy & Technology, 37(1), 1–6, 2024) connects the recent debate about manipulative algorithmic transparency with concerns about problematic pursuits of positive liberty. I argue that the indifference view of manipulative transparency is not aligned with positive liberty, contrary to Franke’s claim, and that even if it is, it is not aligned with the risk that many have attributed to pursuits of positive liberty. Moreover, I suggest that Franke’s worry may generalise beyond the manipulative transparency debate to AI ethics in general.
-
Sleeping Beauty, the renowned Bayesian reasoner, has enrolled in an experiment at the Experimental Philosophy Lab. On Sunday evening, she is put to sleep. On Monday, the experimenters awaken her. After a short chat, the experimenters tell her that it is Monday. She is then put to sleep again, and her memories of everything that happened on Monday are erased. The experimenters then toss a coin. If and only if the coin lands tails, the experimenters awaken her again on Tuesday. Beauty is told all this on Sunday. When she awakens on Monday – unsure of what day it is – what should her credence be that the coin toss on Monday lands heads?
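For concreteness, the awakening frequencies in this protocol can be simulated. The sketch below is my own construction, not anything from the paper: it merely computes the long-run fraction of awakenings that occur in heads runs, which is one datum in the credence debate, not a verdict on what Beauty’s credence should be.

```python
import random

def simulate(trials=100_000, seed=0):
    """Count awakenings across many runs of the experiment."""
    rng = random.Random(seed)
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5   # the Monday-night coin toss
        n = 1 if heads else 2        # heads: Monday only; tails: Monday and Tuesday
        total_awakenings += n
        heads_awakenings += n if heads else 0
    return heads_awakenings / total_awakenings

print(round(simulate(), 3))  # long-run fraction of awakenings in heads runs
```

With a fair coin, this frequency approaches 1/3, the figure ‘thirders’ identify with Beauty’s credence and ‘halfers’ do not.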
-
Large language models (LLMs) such as OpenAI’s ChatGPT reflect, and can potentially perpetuate, social biases in language use. Conceptual engineering aims to revise our concepts to eliminate such bias. We show how machine learning and conceptual engineering can be fruitfully brought together to offer new insights to both conceptual engineers and LLM designers. Specifically, we suggest that LLMs can be used to detect and expose bias in the prototypes associated with concepts, and that LLM de-biasing can serve conceptual engineering projects that aim to revise such conceptual prototypes. At present, these de-biasing techniques primarily involve approaches requiring bespoke interventions based on choices of the algorithm’s designers. Thus, conceptual engineering through de-biasing will include making choices about what kind of normative training an LLM should receive, especially with respect to different notions of bias. This offers a new perspective on what conceptual engineering involves and how it can be implemented. And our conceptual engineering approach also offers insight, to those engaged in LLM de-biasing, into the normative distinctions that are needed for that work.
-
It is conventional wisdom that appreciating the role of luck in our moral lives should make us more sparing with blame. But views of moral responsibility that allow luck to augment a person’s blameworthiness are in tension with this wisdom. I resolve this tension: our common moral luck partially generates a duty to forgo, at least sometimes, retributively blaming the blameworthy person. So, although luck can amplify the blame that a person deserves, luck also partially generates a duty, at least sometimes, not to give the blameworthy person the retributive blame that he deserves.
-
In this paper, we present an agent-based model for studying the impact of ‘myside bias’ on the argumentative dynamics in scientific communities. Recent insights in cognitive science suggest that scientific reasoning is influenced by ‘myside bias’. This bias manifests as a tendency to prioritize the search and generation of arguments that support one’s views rather than arguments that undermine them. Additionally, individuals tend to apply more critical scrutiny to opposing stances than to their own. Although myside bias may pull individual scientists away from the truth, its effects on communities of reasoners remain unclear. The aim of our model is twofold: first, to study the argumentative dynamics generated by myside bias, and second, to explore which mechanisms may act as mitigating factors against its pernicious effects. Our results indicate that biased communities are epistemically less successful, but also tend to be less polarized, than non-biased ones. Moreover, we find that two socio-epistemic mechanisms help communities to mitigate the effect of the bias: the presence of a common filter on weak arguments, which can be interpreted as shared beliefs, and an equal distribution of agents for each alternative at the start of the scientific debate.
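The community-level effect at issue can be illustrated with a deliberately crude agent-based sketch. This is my toy construction, not the authors’ model: the Bayesian update rule, the 0.6/0.4 likelihoods, the lean threshold, and the evidence-discarding bias mechanism are all assumptions made for illustration.

```python
import random

def run_community(n_agents=20, rounds=300, bias=0.0, seed=1):
    """Toy model: hypothesis H is true, and each round every agent
    receives evidence that favours H with probability 0.6.
    A myside-biased agent ignores evidence that cuts against its
    current lean with probability `bias`."""
    rng = random.Random(seed)
    credences = [rng.random() for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            e_for_h = rng.random() < 0.6       # the world mildly favours the truth
            lean_h = credences[i] >= 0.5
            if e_for_h != lean_h and rng.random() < bias:
                continue                       # myside bias: discard unwelcome evidence
            p = credences[i]
            like = 0.6 if e_for_h else 0.4     # Bayes update with likelihoods 0.6 / 0.4
            credences[i] = p * like / (p * like + (1 - p) * (1 - like))
    return sum(c > 0.9 for c in credences) / n_agents  # fraction that learned the truth

print(run_community(bias=0.0), run_community(bias=0.8))
```

Under these assumptions the unbiased community converges almost uniformly on the truth, while the biased one typically leaves the agents who happened to start leaning the wrong way stuck there.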
-
Apologies serve important moral and social functions such as expressing remorse, taking responsibility, and repairing trusting relationships. LLM-based chatbots routinely produce output which has the linguistic form of an apology. However, chatbots are not the kind of linguistic or moral agents that could perform any of the functions listed above.

KEYWORDS: chatbots, apologies, large language models, bullshit

Especially since the release of ChatGPT in late 2022, there has been a furor about chatbots powered by Large Language Models (LLMs). Much of the concern has been directed at the problem of hallucination or confabulation, the tendency of chatbots to produce outputs which look like assertions but which have no connection to the truth. It is common to suggest that the output of chatbots is bullshit in the somewhat technical sense defined by Harry Frankfurt. Chatbot outputs which are not declarative sentences have received less attention. Our focus here is on apologies.
-
Comonotonicity (“same variation”) of random variables minimizes hedging possibilities and has been widely used, e.g., in Gilboa and Schmeidler’s ambiguity models. This paper investigates anticomonotonicity (“opposite variation”; abbreviated “AC”), the natural counterpart to comonotonicity. It minimizes leveraging rather than hedging possibilities. Surprisingly, AC restrictions of several traditional axioms do not give new models. Instead, they strengthen the foundations of existing classical models: (a) linear functionals through Cauchy’s equation; (b) Anscombe-Aumann expected utility; (c) as-if-risk-neutral pricing through no-arbitrage; (d) de Finetti’s bookmaking foundation of Bayesianism using subjective probabilities; (e) risk aversion in Savage’s subjective expected utility. In each case, our generalizations show where the critical tests of classical axioms lie: in the AC cases (maximal hedges). We next present examples where AC restrictions do essentially weaken existing axioms, and do provide new properties and new models.
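The two ordering notions contrasted here are easy to state for random variables on a finite state space, via the standard pairwise definitions; the sketch below and its example vectors are mine, added only to make the “same variation” / “opposite variation” contrast concrete.

```python
from itertools import combinations

def comonotonic(x, y):
    """x and y (lists of outcomes over the same states) are comonotonic
    iff no pair of states is ranked oppositely by the two variables."""
    return all((x[s] - x[t]) * (y[s] - y[t]) >= 0
               for s, t in combinations(range(len(x)), 2))

def anticomonotonic(x, y):
    """Opposite variation: wherever one variable goes up across states,
    the other goes down (or stays flat)."""
    return all((x[s] - x[t]) * (y[s] - y[t]) <= 0
               for s, t in combinations(range(len(x)), 2))

print(comonotonic([1, 2, 3], [10, 10, 30]))   # True: no hedging between them
print(anticomonotonic([1, 2, 3], [5, 4, 4]))  # True: a maximal hedge
print(comonotonic([1, 2, 3], [3, 1, 2]))      # False: mixed variation
```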
-
In the semantic debate about perspectival expressions—predicates of taste, aesthetic and moral terms, epistemic modals, etc.—intuitions about armchair scenarios (e.g., disagreement, retraction) have played a crucial role. More recently, various experimental studies have been conducted, both in relation to disagreement (e.g., Cova, 2012; Foushee and Srinivasan, 2017; Solt, 2018) and retraction (e.g., Knobe and Yalcin, 2014; Khoo, 2018; Beddor and Egan, 2018; Dinges and Zakkou, 2020; Kneer 2021; 2022; Almagro, Bordonaba Plou, and Villanueva, 2023; Marques, 2024), with the aim of establishing a more solid foundation for semantic theorizing. Both these types of data have been used to argue for or against certain views (e.g., contextualism, relativism). In this talk, I discern a common thread in the use of these data and argue for two claims: (i) which perspective is adopted by those judging the armchair scenarios put forward and by the participants in experimental studies crucially matters for the viability of the intended results; (ii) failure to properly attend to this puts recent experimental work at risk. Finally, I consider the case of cross-linguistic disagreement and retraction and assess their importance for the semantic debate about perspectival expressions, as well as for the claim that perspective matters in putting forward the data on which decisions about the right semantic view are made.
-
This paper interrogates the concept of luck in cancer diagnosis. I argue that, while it might have some utility for individuals, at the clinical and research level the concept impedes important prevention efforts and misdirects sources of blame in a cancer diagnosis. Such use, in fact, risks harming already vulnerable efforts at ameliorating social determinants of health, and the concept should therefore be eliminated from research and clinical contexts.
-
We set up a general framework for higher order probabilities. A simple HOP (Higher Order Probability space) consists of a probability space and an operation PR, such that, for every event A and every real closed interval Δ, PR(A, Δ) is the event that A’s "true" probability lies in Δ. (The "true" probability can be construed here either as the objective probability, or the probability assigned by an expert, or the one assigned eventually in a fuller state of knowledge.) In a general HOP the operation PR has also an additional argument ranging over an ordered set of time-points, or, more generally, over a partially ordered set of stages; PR(A, t, Δ) is the event that A’s probability at stage t lies in Δ. First we investigate simple HOPs and then the general ones. Assuming some intuitively justified axioms, we derive the most general structure of such a space. We also indicate various connections with modal logic.
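In symbols, the two clauses might be rendered as follows (the notation is mine, not the paper’s: Ω is the sample space, π_w the "true" probability assignment at point w, and π_{w,t} its stage-t counterpart):

```latex
\mathrm{PR}(A,\Delta)   \;=\; \{\, w \in \Omega : \pi_w(A) \in \Delta \,\},
\qquad
\mathrm{PR}(A,t,\Delta) \;=\; \{\, w \in \Omega : \pi_{w,t}(A) \in \Delta \,\}.
```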
-
Is philosophy of science best carried out at a fine-grained level, focusing on the theories and methods of individual sciences? Or is there still room for a general philosophy of science, for the study of philosophical questions about science as such? For Samuel Schindler, the answer to the last question is a resounding ‘yes!’, and his book Theoretical Virtues in Science is an unapologetic attempt to grapple with what he regards as three key questions for philosophy of science-in-general: What are the features—the virtues—that characterize good scientific theories? What role do these virtues play in scientific inquiry? And what do they allow us, as philosophers, to conclude about reality?
-
The historical sciences appear to present a challenge for mainstream views about the epistemology of science, which have largely been developed with the physical sciences in mind. While debates over realism about microphysical entities still continue, what are we to make of the epistemic situation of historical scientists? The objects of their investigation, namely, historical entities and processes, are, like microphysical entities, not directly observable, but unlike microphysical entities, they are unmanipulable. As Derek Turner ([2007]) has argued, this appears to put historical scientists in a worse situation, epistemically, than microphysicists. But most philosophers (I presume) would not want to be anti-realists about the entities and processes of the past. What, then, is the proper attitude we should have towards the historical sciences? And might thinking about this question provide us with insights that we could direct back towards more traditional debates about the epistemology of science?
-
Expected value maximization gives plausible guidance for moral decision-making under uncertainty in many situations. But it has extremely unappetizing implications in ‘Pascalian’ situations involving tiny probabilities of extreme outcomes. This paper shows, first, that under realistic levels of ‘background uncertainty’ about sources of value independent of one’s present choice, a widely accepted and apparently innocuous principle—stochastic dominance—requires that prospects be ranked by the expected value of their consequences in most ordinary choice situations. But second, this implication does not hold when differences in expected value are driven by tiny probabilities of extreme outcomes. Stochastic dominance therefore lets us draw a surprisingly principled line between ‘ordinary’ and ‘Pascalian’ situations, providing a powerful justification for de facto expected value maximization in the former context while permitting deviations in the latter. Drawing this distinction is incompatible with an in-principle commitment to maximizing expected value, but does not require too much departure from decision-theoretic orthodoxy: it is compatible, for instance, with the view that moral agents must maximize the expectation of a utility function that is an increasing function of moral value.
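The contrast between expected value maximization and stochastic dominance can be made concrete with a small first-order stochastic dominance check. This is only an illustration of the definitions with made-up prospects; it omits the paper’s key ingredient, background uncertainty, under which dominance comes to track expected value in ordinary cases.

```python
def ev(dist):
    """Expected value of a discrete prospect given as {outcome: probability}."""
    return sum(v * p for v, p in dist.items())

def fosd(p, q):
    """p (weakly) first-order stochastically dominates q:
    F_p(v) <= F_q(v) at every point of the joint support."""
    support = sorted(set(p) | set(q))
    return all(
        sum(pp for x, pp in p.items() if x <= v)
        <= sum(qq for x, qq in q.items() if x <= v) + 1e-12
        for v in support
    )

safe = {1.0: 1.0}
better = {2.0: 1.0}
pascalian = {0.0: 1 - 1e-6, 1e9: 1e-6}  # EV 1000 via a tiny chance of an extreme payoff

print(ev(pascalian) > ev(safe))   # True: EV maximization prefers the gamble
print(fosd(better, safe))         # True: a sure improvement dominates
print(fosd(pascalian, safe))      # False: no dominance over the safe option
```

Without background uncertainty, dominance is silent between the safe and Pascalian options; the paper’s point is that realistic background noise closes this gap in ordinary cases but not in Pascalian ones.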
-
In 1543, Nicolaus Copernicus published a book arguing that the Earth revolves around the Sun: De revolutionibus orbium coelestium. This is sometimes painted as a sudden triumph of rationality over the foolish yet long-standing belief that the Sun and all the planets revolve around the Earth. …
-
The much-debated Reflection principle states that a coherent agent’s credences must match their estimates for their future credences. Defenders claim that there are Dutch-book arguments in its favor, putting it on the same normative footing as probabilistic coherence. Critics claim that those arguments rely on the implicit, implausible assumption that the agent is introspective: that they are certain what their own credences are. In this paper, we clarify this debate by surveying several different conceptions of the book scenario. We show that the crucial disagreement hinges on whether agents who are not introspective are known to reliably act on their credences: if they are, then coherent Reflection failures are (at best) ephemeral; if they aren’t, then Reflection failures can be robust—and perhaps rational and coherent. We argue that the crucial question for future debates is which notion of coherence makes sense for such unreliable agents, and sketch a few avenues to explore.
-
How do social factors affect group learning in diverse populations? Evidence from cognitive science gives us some insight into this question, but is generally limited to showing how social factors play out in small groups over short time periods. To study larger groups and longer time periods, we argue that we can combine evidence about social factors from cognitive science with agent-based models of group learning. In this vein, we demonstrate the usefulness of idealized models of inquiry, in which the assumption of Bayesian agents is used to isolate and explore the impact of social factors. Focusing on the impacts of homophily – the tendency of individuals to associate with similar others – on group inquiry, we show that whether a given social factor is beneficial to the community’s epistemic aims depends on its particular manifestation.
-
Thinking about Statistics by Jun Otsuka is a fine book, engagingly written and full of interesting details. Its subtitle, The Philosophical Foundations, might give the idea that we are dealing with philosophy of statistics, but the author makes clear that this would be a mistake: his aim is not to cover the “wealth of discussions concerning the theoretical ground of inductive inference, interpretations of probability, the everlasting battle between Bayesian and frequentist statistics, and so forth” (3). Nor is the book meant as an introduction, be it to statistics or to philosophy (ibid.), even though it contains lucid expositions on p-values, confidence levels, and significance tests, as well as instructive explanations of philosophical positions concerning probabilistic inference.
-
According to Sayre’s law, ‘academic politics are so vicious precisely because the stakes are so small’. Though I anticipate the usual grousing over whether, strictly speaking, this really qualifies as a law, the last decade of dispute between armchair and naturalistic metaphysicians would nevertheless seem to confirm it. While all parties can at least agree that metaphysics can constitute a meaningful activity ‘if done properly’, the latter faction have alleged, in no uncertain terms, that analytic metaphysicians are going about it the wrong way. Since at least the publication of Every Thing Must Go, we metaphysicians of science have claimed, both in print and in private, that analytic metaphysics is ‘irrelevant’, ‘frivolous’, ‘pseudoscientific’, ‘sterile or even empty’, and overall the embarrassing uncle of an otherwise functional philosophical family. Our metaphysics, by contrast, is appropriately ‘continuous with’, ‘informed by’, and ‘sensitive to’ science, whose garments, it is taken to be generally understood, we are each obliged to fumble. As such, only ours exhibits the right balance of a priori and empirical content to be broadly deserving of intellectual respect.
-
Scientific collaboration is taking place with increasing frequency, at least since the Manhattan project. Globalization and rapid advancement in communication technologies have made national and international inquiry across different scientific disciplines easier. Boyer-Kassem, Mayo-Wilson, and Weisberg have collected eleven chapters that address conceptual and normative issues about collaborative research and ensuing collective knowledge in the sciences. These issues are clustered around four core topics, each forming one part of the book: (i) information sharing among scientists, (ii) the reasons and strategies for (fruitful) collaboration, (iii) challenges, in terms of accountability, to the ordinary notions of authorship and refereeing, and (iv) the relationship between individual and group opinions in social decision-making problems. Most of the authors employ formal tools (mathematical models, computer simulations) to discuss and analyse different aspects of the dynamics of scientific communities and collaborative research. Here, I focus on the notable contributions of each chapter.
-
The role of ethical, political, social, and other non-epistemic values in science has recently emerged as a mainstream topic within the philosophy of science. Articles on science and values appear regularly in prominent philosophy of science journals, and the last few meetings of the Philosophy of Science Association (PSA) have included multiple sessions on values in science.
-
Stefan Riedener, Uncertain Values: An Axiomatic Approach to Axiological Uncertainty, De Gruyter, 2021, 167pp, $16.99, ISBN 978-3-11-073957-2. Stefan Riedener’s book is concerned with axiological uncertainty — that is, the problem of how to evaluate prospects given uncertainty about what is the correct axiology. For evaluations of this kind of meta-value, Riedener uses the term ‘A-value’ (3). The main goal of the book is to provide an axiomatic argument for Expected Value Maximization, which is the view that an option f has an at least as great A-value as an option g if and only if f has an at least as great expected value as g, where the expected value of an option is a sum of the value of the option on each axiology weighted by one’s credence in the axiology (5).
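The expected-value computation described in the review is simple enough to spell out. The sketch below uses hypothetical axiologies, values, and credences of my own choosing, purely to illustrate the weighted sum; it is not drawn from Riedener’s book.

```python
def expected_meta_value(option_values, credences):
    """Expected value of an option across rival axiologies: the option's
    value on each axiology, weighted by one's credence in that axiology."""
    assert abs(sum(credences.values()) - 1.0) < 1e-9  # credences must sum to 1
    return sum(credences[ax] * v for ax, v in option_values.items())

# hypothetical credences and per-axiology values
credences = {"totalism": 0.6, "averagism": 0.4}
option_f = {"totalism": 10.0, "averagism": -2.0}
option_g = {"totalism": 4.0, "averagism": 5.0}

ev_f = expected_meta_value(option_f, credences)  # 0.6*10 + 0.4*(-2) = 5.2
ev_g = expected_meta_value(option_g, credences)  # 0.6*4  + 0.4*5   = 4.4
print(ev_f > ev_g)  # Expected Value Maximization ranks f above g
```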
-
In recent years, there has been a proliferation of competing conceptions of what it means for a predictive algorithm to treat its subjects fairly. Most approaches focus on explicating a notion of group fairness, i.e. of what it means for an algorithm to treat one group unfairly in comparison to another. In contrast, Dwork et al. (2012) attempt to carve out a formalised conception of individual fairness, i.e. of what it means for an algorithm to treat an individual fairly or unfairly. In this paper, I demonstrate that the conception of individual fairness advocated by Dwork et al. is closely related to a criterion of group fairness, called ‘base rate tracking’, introduced in Eva (2022). I subsequently show that base rate tracking solves some fundamental conceptual problems associated with the Lipschitz criterion, before arguing that group level fairness criteria are at least as powerful as their individual level counterparts when it comes to diagnosing algorithmic bias.
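One way to make a base-rate-sensitive group criterion concrete is to compare each group’s mean predicted score against its base rate. The sketch below is my own gloss, not Eva’s formal statement of base rate tracking, and the data are hypothetical; it only illustrates the kind of group-level comparison at issue.

```python
def base_rate(labels):
    """Fraction of positive outcomes in a group."""
    return sum(labels) / len(labels)

def mean_score(scores):
    """Average predicted risk score assigned to a group."""
    return sum(scores) / len(scores)

def tracking_gap(scores_a, labels_a, scores_b, labels_b):
    """My gloss on a base-rate-tracking check: each group's mean score
    should exceed its base rate by the same margin; the gap is zero
    exactly when that holds."""
    margin_a = mean_score(scores_a) - base_rate(labels_a)
    margin_b = mean_score(scores_b) - base_rate(labels_b)
    return margin_a - margin_b

# hypothetical data: both groups have base rate 0.5, but group B is scored higher
gap = tracking_gap([0.2, 0.4], [0, 1], [0.7, 0.9], [0, 1])
print(gap < 0)  # group A is under-scored relative to group B
```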
-
I introduce a novel method for evaluating counterfactuals. According to the branchpoint proposal, counterfactuals are evaluated by ‘rewinding’ the universe to a time at which the antecedent had a reasonable probability of coming about and considering the probability for the consequent, given the antecedent. This method avoids surprising dynamics, allows the time of the branchpoint to be determined by the system’s dynamics (rather than by context) and uses scientific posits to specify the relevant probabilities. I then show how the branchpoint proposal can be justified by considering an evidential role for counterfactuals: counterfactuals help us reason about the probabilistic relations that hold in a hypothetical scenario at which the antecedent is maximally unsettled. A result is that we should distinguish the use of counterfactuals in contexts of control from their use for reasoning evidentially. Standard Lewisian accounts run into trouble precisely by expecting a single relation to play both roles.