1.
    Cumulative cultural knowledge (CCK), the knowledge that we acquire via social learning and that has been refined by previous generations, is of central importance to our species’ flourishing. Given its importance, we should expect our best epistemological theories to account for how we acquire it. Perhaps surprisingly, CCK and its acquisition via cultural learning have received little attention from social epistemologists. Here, I focus on how we should epistemically evaluate how agents acquire CCK. After sampling some reasons why extant theories cannot account for CCK, I suggest that things aren’t as bleak as they might look. I explain how agents deserve epistemic credit for how CCK is transmitted in cultural learning by promoting a central need of their social group: the efficient and safe transmission of CCK. A good initial fit exists between this observation and Greco’s knowledge-economy framework. Ultimately, however, Greco’s framework doesn’t straightforwardly accommodate CCK because of its strict focus on testimony. I point out two issues in the framework that stem from this focus. The resulting view advocates giving epistemic credit to agents when they act to promote their communities’ epistemic needs in the right way, and it highlights the various ways in which agents come to do this.
    Found 2 days, 8 hours ago on PhilSci Archive
  2.
    We propose a framework for the analysis of choice behaviour when choices are made explicitly in chronological order. We relate this framework to the traditional choice-theoretic setting, from which the chronological aspect is absent, and compare it to other frameworks that extend this traditional setting. We then use this framework to analyse various models of preference discovery. We characterise, via simple revealed preference tests, several models that differ in terms of (i) the priors that the decision-maker holds about alternatives and (ii) whether the decision-maker chooses period by period or uses her knowledge about future menus to inform her present choices. These results provide novel testable implications for the preference discovery process of myopic and forward-looking agents.
    Found 2 days, 9 hours ago on Nobuyuki Hanaki's site
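    The abstract doesn’t spell out the paper’s tests, but to give a concrete sense of what a revealed preference test is, here is a minimal Python sketch of a check for the Weak Axiom of Revealed Preference (WARP) over observed (menu, choice) data; the data and function names are illustrative, not the paper’s, and the chronological dimension the paper adds is deliberately ignored.

        # WARP: if x is ever chosen from a menu containing y (x != y),
        # then y is never chosen from a menu containing x.
        def satisfies_warp(observations):
            """observations: list of (menu, choice) pairs, each menu a set."""
            revealed = {(c, y) for menu, c in observations for y in menu if y != c}
            return not any((y, x) in revealed for (x, y) in revealed)

        data = [({'a', 'b'}, 'a'), ({'b', 'c'}, 'b'), ({'a', 'c'}, 'a')]
        print(satisfies_warp(data))                        # True: consistent
        print(satisfies_warp(data + [({'a', 'c'}, 'c')]))  # False: a/c reversal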
  3.
    Consumption decisions are partly influenced by values and ideologies. Consumers care about global warming, child labor, fair trade, etc. We develop an axiomatic model of intrinsic values – those that are carriers of meaning in and of themselves – and argue that they often introduce discontinuities near zero. For example, a vegetarian’s preferences would be discontinuous near a zero amount of animal meat. We distinguish intrinsic values from instrumental ones, which are means rather than ends and serve as proxies for intrinsic values. We illustrate the relevance of our value-based model in different contexts, including equity concerns and prosocial behavior.
    Found 2 days, 17 hours ago on Itzhak Gilboa's site
  4.
    Our universe seems to be miraculously fine-tuned for life. Multiverse theories have been proposed as an explanation for this on the basis of probabilistic arguments, but various authors have objected that our inference should include the total evidence that this universe in particular contains life, which would block the argument. The debate thus crucially hinges on how background knowledge and evidence are distinguished in Bayesian reasoning, and on how indexical or demonstrative terms are analysed. The aim of this article is to take a step back and examine these various aspects of Bayesian reasoning and how they affect the arguments. The upshot is that there are reasons to resist the fine-tuning argument for the multiverse, but the “this-universe” objection is not one of them.
    Found 4 days, 8 hours ago on PhilSci Archive
  5.
    Scientific hedges are communicative devices used to qualify and weaken scientific claims. Gregor Betz (2013) has argued – unconvincingly, we think – that hedging can rescue the value-free ideal for science. Nevertheless, Betz is onto something when he suggests there are political principles that recommend scientists hedge public-facing claims. In this paper, we recast this suggestion using the notion of public justification. We formulate and reject a Rawlsian argument that locates the justification for hedging in its ability to forge consensus.
    Found 4 days, 8 hours ago on PhilSci Archive
  6.
    The underreporting of suspected adverse drug reactions remains a central problem for contemporary post-market drug surveillance, or ‘pharmacovigilance.’ Pharmacovigilance pioneer W.H.W. Inman argued that ‘deadly sins’ committed by clinicians are to blame for underreporting. Of these ‘sins,’ ignorance and lethargy are the most obvious and the most consequential in causing underreporting. However, recent analyses show that diffidence, insecurity, and indifference also play a major role. I aim to deepen our understanding of diffidence, insecurity, and indifference by arguing that these sins are underwritten by value judgments arising via epistemic risk. I contend that ‘evidence-based’ medicine codifies these sins.
    Found 4 days, 8 hours ago on PhilSci Archive
  7.
    According to reductivist axiological perfectionism about well-being (RAP), well-being is constituted by the development and exercise of central human capacities. In defending this view, proponents have relied heavily on the claim that RAP provides a unifying explanation of the entries on the ‘objective list’ of well-being constituents. I argue that this argument fails to provide independent support for the theory. RAP does not yield a plausible objective list unless such a list is used at every stage of theory development to shape the details of the view. Absent such motivated fine-tuning, RAP fails to provide a satisfying account even of two supposed paradigm cases of perfectionist value: achievement and knowledge. Thus, if RAP is to be defended, it must be defended directly, by providing reasons for accepting the axiological principle at its heart. It cannot be defended indirectly, by pointing to its attractive implications.
    Found 5 days, 17 hours ago on Ergo
  8.
    A specter is haunting economics—the specter of revealed preference theory. Many philosophers have entered into an alliance to exorcise this specter: Sen (1977), Hausman (2012), Dietrich and List (2016), and Guala (2012; 2019). Given the trenchant critique it has faced, the longevity of revealed preference theory is quite surprising. It still holds considerable power among economists, and in recent years philosophers, too, have begun to offer novel arguments in its defense (e.g., Vredenburgh 2020; Clarke 2020; Thoma 2021a; 2021b). At its core, revealed preference theory is the view that preferences are just patterns in choice-behavior. My aim in this paper is to argue against the revival of revealed preference theory. Towards this end, I will first outline the different facets of revealed preference theory (Section 2). I will then briefly present the two most common arguments that philosophers of economics have offered against it: the argument from belief and the argument from causality (Section 3).
    Found 5 days, 17 hours ago on Ergo
  9.
    Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say ‘yes’, but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say ‘no’. This distinction appears to be practically important: among other things, additive axiologies generally assign great importance to large changes in population size, and therefore tend to strongly prioritize the long-term survival of humanity over the interests of the present generation. Non-additive axiologies, on the other hand, need not assign great importance to large changes in population size. We show, however, that when there is a large enough ‘background population’ unaffected by our choices, a wide range of non-additive axiologies converge in their implications with additive axiologies—for instance, average utilitarianism converges with critical-level utilitarianism and various egalitarian theories converge with prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from the scale of potential future populations for the astronomical importance of avoiding existential catastrophe, and other arguments in practical ethics that seem to presuppose additive separability, may succeed in practice whether or not we accept additive separability as a basic axiological principle.
    Found 5 days, 17 hours ago on Ergo
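    To fix ideas, the additive/non-additive contrast can be put schematically (my notation, not the paper’s), for a world w whose population has welfare levels u_1, ..., u_n:

        V_{total}(w) = \sum_{i=1}^{n} u_i                      % additive: total utilitarianism
        V_{cl}(w)    = \sum_{i=1}^{n} (u_i - c)                % additive: critical-level view, fixed c
        V_{avg}(w)   = \tfrac{1}{n} \sum_{i=1}^{n} u_i         % non-additive: average utilitarianism

    In the additive cases each person’s contribution to overall value is independent of population size; in the average case it is not, which is why the two families can diverge sharply over large changes in population size. The paper’s limit results show that holding a large enough background population fixed washes much of this difference out.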
  10.
    I want to comment on an old objection to the “similarity analysis” of counterfactuals, and on a more recent, but related, argument for counterfactual skepticism. According to the similarity analysis, a counterfactual A > C is true iff C is true at all A-worlds that are most similar, in certain respects, to the actual world. The old objection that I have in mind is that the similarity analysis fails to validate Simplification of Disjunctive Antecedents (SDA), the inference from (A ∨ B) > C to A > C and B > C. Imagine someone utters (1a) on a hot summer day.
    Found 6 days, 7 hours ago on Wolfgang Schwarz's site
  11.
    I propose an approach to liar and Curry paradoxes inspired by the work of Roger Swyneshed in his treatise on insolubles (1330-1335). The keystone of the account is the idea that liar sentences and their ilk are false (and only false) and that the so-called “capture” direction of the T-schema should be restricted. The proposed account retains what I take to be the attractive features of Swyneshed’s approach without leading to some worrying consequences Swyneshed accepts. The approach and the resulting logic (called “Swynish Logic”) are non-classical, but are consistent and compatible with many elements of the classical picture including modus ponens, modus tollens, and double-negation elimination and introduction. It is also compatible with bivalence and contravalence. My approach to these paradoxes is also immune to an important kind of revenge challenge that plagues some of its rivals.
    Found 1 week, 1 day ago on PhilPapers
  12.
    Naïve Instrumentalists are practically unconstrained in pursuit of their moral or political goals. If it seems to them, just based on the immediately legible evidence, that violence or deception would advance their goals, they won’t hesitate to act accordingly. …
    Found 1 week, 2 days ago on Good Thoughts
  13.
    Leddington (2016) remains the leading contemporary philosophical account of magic, one that has been relatively unchallenged. In this discussion piece, I have three aims: (i) to criticise Leddington’s attempt to explain the experience of magic in terms of belief-discordant alief; (ii) to explore the possibility that much, if not all, of the experience of magic can be explained by mundane belief-discordant perception; and (iii) to argue that make-believe is crucial to successful performances of magic in ways Leddington at best overlooks and at worst denies.
    Found 1 week, 2 days ago on D. Cavedon-Taylor's site
  14.
    I reconstruct J. H. Lambert’s views on how practical grounds relate to epistemic features, such as certainty. I argue, first, that Lambert’s account of moral certainty does not involve any distinctively practical influence on theoretical belief. However, it does present an interesting form of fallibilism about justification as well as a denial of a tight link between knowledge and action. Second, I argue that for Lambert, the persistence principle that underwrites induction is supported by practical reasons to believe; this indicates that Lambert is a moderate pragmatist about reasons for theoretical belief.
    Found 1 week, 2 days ago on PhilPapers
  15.
    Social learning is a collective approach to decentralised decision-making and comprises two processes: evidence updating and belief fusion. In this paper we propose a social learning model in which agents’ beliefs are represented by a set of possible states, and where the evidence collected can vary in its level of imprecision. We investigate this model using multi-agent and multi-robot simulations and demonstrate that it is robust to imprecise evidence. Our results also show that certain kinds of imprecise evidence can enhance the efficacy of the learning process in the presence of sensor errors.
    Found 1 week, 3 days ago on J. Lawry's site
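    As a rough sketch of the kind of model the abstract describes, here is a toy Python version in which beliefs are sets of possible states, imprecise evidence prunes them, and pairwise fusion intersects compatible beliefs and pools conflicting ones. The specific operators (intersect-else-union fusion, evidence as a window of states) are my assumptions for illustration, not the paper’s definitions.

        import random

        STATES = range(10)    # hypothetical state space
        TRUE_STATE = 3

        def fuse(b1, b2):
            """Pairwise belief fusion: intersect when compatible, else pool."""
            common = b1 & b2
            return common if common else b1 | b2

        def update(belief, evidence):
            """Imprecise evidence: a set of states the sensor reports the
            true state to lie in; contradictory reports are ignored."""
            pruned = belief & evidence
            return pruned if pruned else belief

        beliefs = [set(STATES) for _ in range(20)]
        for _ in range(200):
            # one agent collects imprecise evidence: a window of 3 states
            i = random.randrange(len(beliefs))
            low = random.randint(max(0, TRUE_STATE - 2), TRUE_STATE)
            beliefs[i] = update(beliefs[i], set(range(low, low + 3)))
            # two agents fuse their beliefs
            j, k = random.sample(range(len(beliefs)), 2)
            beliefs[j] = beliefs[k] = fuse(beliefs[j], beliefs[k])

        print(sum(TRUE_STATE in b for b in beliefs), "of", len(beliefs),
              "agents still deem the true state possible")

    The paper runs this kind of loop with varying levels of evidence imprecision and sensor error; the toy above uses error-free (if imprecise) evidence.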
  16.
    An act type is something that an agent can do: walk to the store, climb Mount Everest, trip over a wire. Act types are ‘repeatables’: many have climbed Mount Everest. Act types are not events. If you climb Everest, an event occurs—your cold, brutal climb—but this event is not what you do. What you do is climb Everest.
    Found 1 week, 3 days ago on PhilPapers
  17.
    Pascal’s Wager involves expected utilities. In this chapter, we examine the Wager in light of two main features of expected utility theory: utilities and probabilities. We discuss infinite and finite utilities, and zero, infinitesimal, extremely low, imprecise, and undefined probabilities. These have all come up in recent literature regarding Pascal’s Wager. We consider the problems each creates and suggest prospects for the Wager in light of these problems.
    Found 1 week, 3 days ago on PhilPapers
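    The calculation the chapter scrutinizes can be written in its textbook form (my rendering, not necessarily the chapter’s), with p the probability that God exists and f_1, f_2, f_3 finite utilities:

        EU(\text{wager})    = p \cdot \infty + (1 - p) \cdot f_1 = \infty   % for any p > 0
        EU(\text{no wager}) = p \cdot f_2 + (1 - p) \cdot f_3               % finite

    so wagering maximizes expected utility whenever p > 0. Each case the chapter examines (zero, infinitesimal, extremely low, imprecise, or undefined probabilities; infinite or merely finite utilities) puts pressure on a different ingredient of this calculation.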
  18.
    This paper begins by applying a version of Timothy Williamson’s anti-luminosity argument to normative properties. This argument suggests that there must be at least some unknowable normative facts in normative Sorites sequences, or otherwise we get a contradiction given certain plausible assumptions concerning safety requirements on knowledge and our doxastic dispositions. This paper then focuses on the question of how the defenders of different forms of metaethical anti-realism (namely, error theorists, subjectivists, relativists, contextualists, expressivists, response dependence theorists, and constructivists) could respond to the explanatory challenge created by the previous argument. It argues that, with two exceptions, the metaethical anti-realists need not challenge the argument itself, but rather they can find ways to explain how the unknowable normative facts can obtain. These explanations are based on the idea that our own attitudes on which the normative facts are grounded need not be transparent to us either. Reaching this conclusion also illuminates how metaethical anti-realists can make sense of instances of normative vagueness more generally.
    Found 1 week, 3 days ago on PhilPapers
  19.
    A famous mathematical theorem says that the sum of an infinite series of numbers can depend on the order in which those numbers occur. Suppose we interpret the numbers in such a series as representing instances of some physical quantity, such as the weights of a collection of items. The mathematics seems to lead to the result that the weight of a collection of items can depend on the order in which those items are weighed. But that is very hard to believe. A puzzle then arises: How do we interpret the metaphysical significance of this mathematical theorem? I first argue that prior solutions to the puzzle lead to implausible consequences. Then I develop my own solution, where the basic idea is that the weight of a collection of items is equal to the limit of the weights of its finite subcollections contained within ever-expanding regions of space. I show how my solution is intuitively plausible and philosophically motivated, how it reveals an underexplored line of metaphysical inquiry about quantities and locations, and how it elucidates some classic puzzles concerning supertasks.
    Found 1 week, 3 days ago on PhilPapers
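    The theorem in question is the Riemann series theorem: a conditionally convergent series can be rearranged to converge to any value. A quick numerical illustration in Python (my example, not the paper’s) uses the alternating harmonic series, which sums to ln 2 in its natural order but to (3/2) ln 2 when each negative term is preceded by two positive ones.

        import math

        def original_order(n_terms):
            """1 - 1/2 + 1/3 - 1/4 + ...  -> ln 2"""
            return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

        def rearranged(n_blocks):
            """(1 + 1/3 - 1/2) + (1/5 + 1/7 - 1/4) + ...  -> 1.5 * ln 2
            Same terms as above, taken two positives per negative."""
            total, odd, even = 0.0, 1, 2
            for _ in range(n_blocks):
                total += 1 / odd + 1 / (odd + 2) - 1 / even
                odd += 4
                even += 2
            return total

        print(original_order(10 ** 6), math.log(2))     # ~0.6931 vs 0.6931
        print(rearranged(10 ** 6), 1.5 * math.log(2))   # ~1.0397 vs 1.0397

    Read as weights of items, the two orderings assign the same collection two different ‘total weights’, which is the puzzle the paper addresses.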
  20.
    What does ‘Smith knows that it might be raining’ mean? Expressivism here faces a challenge, as its basic forms entail a pernicious type of transparency, according to which ‘Smith knows that it might be raining’ is equivalent to ‘it is consistent with everything that Smith knows that it is raining’ or ‘Smith doesn’t know that it isn’t raining’. Pernicious transparency has direct counterexamples and undermines vanilla principles of epistemic logic, such as that knowledge entails true belief and that something can be true without one knowing it might be. I re-frame the challenge in precise terms and propose a novel expressivist formal semantics that meets it by exploiting (i) the topic-sensitivity and fragmentation of knowledge and belief states and (ii) the apparent context-sensitivity of epistemic modality. The resulting form of assertibility semantics advances the state of the art for state-based bilateral semantics by combining attitude reports with context-sensitive modal claims, while evading various objectionable features. In appendices, I compare the proposed system to Beddor and Goldstein’s ‘safety semantics’ and discuss its analysis of a modal Gettier case due to Moss.
    Found 1 week, 3 days ago on PhilPapers
  21.
    Suppose that we have n objects α1, ..., αn, and we want to define something like numerical values (at least hyperreal ones, if we can’t have real ones) on the basis of comparisons of value. Here is one interesting way to proceed. …
    Found 1 week, 3 days ago on Alexander Pruss's Blog
  22.
    The safety conception of knowledge holds that a belief constitutes knowledge iff relevantly similar beliefs—its epistemic counterparts—are true. It promises an instructive account of why certain general principles of knowledge hold. We focus on two such principles that anyone should endorse: the closure principle that knowledge is downward closed under competent conjunction elimination, and the counter-closure principle that knowledge is upward closed under competent conjunction introduction. We argue that anyone endorsing the former must also endorse the latter on pain of an unacceptable form of bootstrapping. We devise new formal models to identify necessary and sufficient conditions for these principles to hold on conceptions that construe knowledge in terms of true counterparts. These conditions state that counterparts of premise and conclusion beliefs are coordinated in certain ways whenever these beliefs stand in the relevant inferential relations. We show that the safety conception faces insuperable problems vindicating these coordination principles, because its epistemic counterpart relation is symmetric. We conclude that it thus proves unable to account for minimal closure properties of knowledge. More generally, our formal results establish parameters within which any conception must operate that construes knowledge in terms of true counterparts.
    Found 1 week, 3 days ago on PhilPapers
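    In standard epistemic-logic notation, the two principles at issue are (my rendering of the abstract’s gloss, with the competence qualification on the inferences left implicit):

        \text{Closure: }          K(p \land q) \to (Kp \land Kq)
        \text{Counter-closure: }  (Kp \land Kq) \to K(p \land q)

    The paper’s formal results concern what a counterpart-based analysis of K must assume about how the counterparts of premise and conclusion beliefs line up for these schemas to hold.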
  23.
    The goal of this form of politics is the manufacturing and maintaining of 'pluralistic ignorance' where members of a group mistakenly believe that most other members disagree with them. As a result, a well-organised minority is able to dominate the group as a whole by convincing them of a fictitious shared consensus supporting their rule or values. …
    Found 1 week, 4 days ago on The Philosopher's Beard
  24.
    Experimental philosophy of explanation rising: The case for a plurality of concepts of explanation
    This paper brings together results from the philosophy and the psychology of explanation in order to argue that there are multiple concepts of explanation in human psychology. Specifically, it is shown that pluralism about explanation coheres with the multiplicity of models of explanation available in the philosophy of science, and is supported by evidence from the psychology of explanatory judgment. Focusing on the case of a norm of explanatory power, the paper concludes by responding to the worry that if there is a plurality of concepts of explanation, one will not be able to normatively evaluate what counts as a good explanation.
    Found 1 week, 5 days ago on Matteo Colombo's site
  25.
    Scientific and ordinary understanding of human social behaviour assumes that the Humean theory of motivation is true. The present chapter explores whether and in which sense the Humean theory of motivation may be true in the light of recent empirical and theoretical work in the computational neuroscience of social motivation. It is argued that the Humean theory is false if an increasingly popular model in computational neuroscience turns out to be correct. According to this model, brains are probabilistic prediction machines, whose function is to minimize the uncertainty about their sensory exchanges with the environment. If brains are these kinds of machines, then we should reconceive the nature of social motivation without appealing to desire. We should focus instead on how social motivation is biased towards the reduction of social uncertainty, and on how social norms and other social institutions function as uncertainty-minimizing devices.
    Found 1 week, 5 days ago on Matteo Colombo's site
  26.
    I recently discussed my “make desertion fast” proposal (updated here) with philosopher Ned Dobos over lunch. Though he’s sympathetic, he’s sent me the following two emails outlining possible objections. …
    Found 1 week, 5 days ago on Bet On It
  27.
    Theories of graded causation attract growing attention in the philosophical debate on causation. An important field of application is the controversial relationship between causation and moral responsibility. However, it is still unclear how exactly the notion of graded causation should be understood in the context of moral responsibility. One question is whether we should endorse a proportionality principle, according to which the degree of an agent’s moral responsibility is proportionate to their degree of causal contribution. A second question is whether a theory of graded causation should measure closeness to necessity or closeness to sufficiency. In this paper, we argue that we should indeed endorse a proportionality principle and that this principle supports a notion of graded causation relying on closeness to sufficiency rather than closeness to necessity. Furthermore, we argue that this insight helps to provide a plausible analysis of the so-called ‘Moral Difference Puzzle’ recently described by Bernstein.
    Found 1 week, 5 days ago on PhilPapers
  28.
    We show that knowledge satisfies interpersonal independence, meaning that a non-trivial sentence describing one agent’s knowledge cannot be equivalent to a sentence describing another agent’s knowledge. The same property of interpersonal independence holds, mutatis mutandis, for belief. In the case of knowledge, interpersonal independence is implied by the fact that there are no non-trivial sentences that are common knowledge in every model of knowledge. In the case of belief, interpersonal independence follows from a strong interpersonal independence that knowledge does not have. Specifically, there is no sentence describing the beliefs of one person that implies a sentence describing the beliefs of another person.
    Found 1 week, 6 days ago on PhilSci Archive
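    Put schematically (my rendering, not the paper’s formalism): for distinct agents a and b, there is no valid equivalence φ ↔ ψ where φ is a non-trivial sentence built only from descriptions of a’s knowledge and ψ only from descriptions of b’s knowledge. For belief the result is stronger: even valid implications φ → ψ between such non-trivial sentences are ruled out.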
  29.
    I distinguish between pure self-locating credences and superficially self-locating credences, and argue that there is never any rationally compelling way to assign pure self-locating credences. I first argue that from a practical point of view, pure self-locating credences simply encode our pragmatic goals, and thus pragmatic rationality does not dictate how they must be set. I then use considerations motivated by Bertrand’s paradox to argue that the indifference principle and other popular constraints on self-locating credences fail to be a priori principles of epistemic rationality, and I critique some approaches to deriving self-locating credences based on analogies to non-self-locating cases. Finally, I consider the implications of this conclusion for various applications of self-locating probabilities in scientific contexts, arguing that it may undermine certain kinds of reasoning about multiverses, the simulation hypothesis, Boltzmann brains and vast-world scenarios.
    Found 1 week, 6 days ago on PhilSci Archive
  30.
    Explainable AI (xAI) methods are important for establishing trust in the use of black-box models. However, criticism of current xAI methods has recently mounted: their explanations disagree with one another, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth has historically not been a desideratum in science. Idealizations, the intentional distortions introduced into scientific theories and models, are commonplace in the natural sciences and are seen as a successful scientific tool. Thus, it is not falsehood qua falsehood that is the issue. In this paper, I outline the need for xAI research to engage in idealization evaluation. Drawing on the use of idealizations in the natural sciences and philosophy of science, …
    Found 1 week, 6 days ago on PhilSci Archive