As part of the week of recognizing R. A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012/2017. The comments from 2017 lead to a troubling issue that I will bring up in the comments today. …
There is a vast literature that seeks to uncover features underlying moral judgment by eliciting reactions to hypothetical scenarios such as trolley problems. These thought experiments assume that participants accept the outcomes stipulated in the scenarios. Across seven studies (N = 968), we demonstrate that intuition overrides stipulated outcomes even when participants are explicitly told that an action will result in a particular outcome. Participants instead substitute their own estimates of the probability of outcomes for stipulated outcomes, and these probability estimates in turn influence moral judgments. Our findings demonstrate that intuitive likelihoods are one critical factor in moral judgment, one that is not suspended even in moral dilemmas that explicitly stipulate outcomes. Features thought to underlie moral reasoning, such as intention, may operate, in part, by affecting the intuitive likelihood of outcomes, and, problematically, moral differences between scenarios may be confounded with non-moral intuitive probabilities.
It is often said that ‘what it is like’-knowledge cannot be acquired by consulting testimony or reading books [Lewis 1998; Paul 2014; 2015a]. However, people also routinely consult books like What It Is Like to Go to War [Marlantes 2014], and countless ‘what it is like’ articles and YouTube videos, in the apparent hope of gaining knowledge about what it is like to have experiences they have not had themselves. This article examines this puzzle and tries to solve it by appealing to recent work on knowing-wh ascriptions. In closing I indicate the wider significance of these ideas by showing how they can help us to evaluate prominent arguments by Paul [2014; 2015a] concerning transformative experiences.
In ‘Freedom and Resentment’ P. F. Strawson argues that reactive attitudes like resentment and indignation cannot be eliminated altogether, because doing so would involve exiting interpersonal relationships altogether. I describe an alternative to resentment: a form of moral sadness about wrongdoing that, I argue, preserves our participation in interpersonal relationships. Substituting this moral sadness for resentment and indignation would amount to a deep and far-reaching change in the way we relate to each other – while keeping in place the interpersonal relationships, which, Strawson rightfully believes, cannot be eliminated.
The debate about the nature of knowledge-how is standardly thought to be divided between Intellectualist views, which take knowledge-how to be a kind of propositional knowledge, and Anti-Intellectualist views which take knowledge-how to be a kind of ability. In this paper, I explore a compromise position—the Interrogative Capacity view—which claims that knowing how to do something is a certain kind of ability to generate answers to the question of how to do it. This view combines the Intellectualist thesis that knowledge-how is a relation to a set of propositions with the Anti-Intellectualist thesis that knowledge-how is a kind of ability. I argue that this view combines the positive features of both Intellectualism and Anti-Intellectualism.
Comparativism is the position that the fundamental doxastic state consists in comparative beliefs (e.g., believing p to be more likely than q), with partial beliefs (e.g., believing p to degree x) being grounded in and explained by patterns amongst comparative beliefs that exist under special conditions. In this paper, I develop a version of comparativism that originates with a suggestion made by Frank Ramsey in his ‘Probability and Partial Belief’ (1929). By means of a representation theorem, I show how this ‘Ramseyan comparativism’ can be used to weaken the (unrealistically strong) conditions required for probabilistic coherence that comparativists usually rely on, while still preserving enough structure to let us retain the usual comparativists’ account of quantitative doxastic comparisons.
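The direction of explanation at issue can be made concrete with a toy check of de Finetti's qualitative additivity condition, the kind of coherence constraint on comparative beliefs that representation theorems exploit. This is an illustrative sketch only, not Ramsey's suggestion or the paper's theorem: the three-outcome space, the numeric credences, and all function names are invented for the example.

```python
from itertools import chain, combinations

# Toy outcome space and a candidate numeric credence assignment.
outcomes = ["a", "b", "c"]
prob = {"a": 0.5, "b": 0.3, "c": 0.2}

def events(outcomes):
    # All subsets of the outcome space, as frozensets.
    s = list(outcomes)
    return [frozenset(e) for e in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def p(event):
    return sum(prob[o] for o in event)

# The comparative ordering induced by prob: E is at least as likely as F
# iff p(E) >= p(F).  Comparativism reverses the direction of explanation:
# the ordering is fundamental, and prob merely "represents" it.
def at_least_as_likely(e, f):
    return p(e) >= p(f)

# de Finetti's qualitative additivity: for G disjoint from E and F,
# E >= F  iff  E∪G >= F∪G.  Any ordering induced by a probability
# function satisfies it; representation theorems run in the other
# direction, from constraints like this back to a representing function.
def qualitatively_additive():
    for e in events(outcomes):
        for f in events(outcomes):
            for g in events(outcomes):
                if g & (e | f):
                    continue  # only disjoint G is relevant
                if at_least_as_likely(e, f) != at_least_as_likely(e | g, f | g):
                    return False
    return True

print(qualitatively_additive())  # True
```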
A number of naturalistic philosophers of mind endorse a realist attitude towards the results of Bayesian cognitive science. This realist attitude is currently unwarranted, however. It is not obvious that Bayesian models possess special epistemic virtues over alternative models of mental phenomena involving uncertainty. In particular, the Bayesian approach in cognitive science is not more simple, unifying and rational than alternative approaches; and it is not obvious that the Bayesian approach is more empirically adequate than alternatives. It is at least premature, then, to assert that mental phenomena involving uncertainty are best explained within the Bayesian approach. To continue with exclusive praise for Bayes would be dangerous, as it risks monopolizing the center of attention and leading to the neglect of different but promising formal approaches. Naturalistic philosophers of mind would be wise to correct this mistake by endorsing an agnostic, instrumentalist attitude towards Bayesian cognitive science.
People often talk about the synchronic Dutch Book argument for Probabilism and the diachronic Dutch Strategy argument for Conditionalization. But the synchronic Dutch Book argument for the Principal Principle is mentioned less. …
[The following is a guest post by Bob Lockie. — JS]

He who says that all things happen of necessity can hardly find fault with one who denies that all happens by necessity; for on his own theory this very argument is voiced by necessity (Epicurus 1964: XL).

Lockie, Robert. …
This essay is an opinionated exploration of the constraints that modal discourse imposes on the theory of assertion. Primary focus is on the question whether modal discourse challenges the traditional view that all assertions have propositional content. This question is tackled largely with reference to discourse involving epistemic modals, although connections with other flavors of modality are noted along the way.
There is an emerging skepticism about the existence of testimonial knowledge-how (Hawley (2010), Poston (2016), Carter and Pritchard (2015a)). This is unsurprising, since a number of influential approaches to knowledge-how struggle to accommodate testimonial knowledge-how. Nonetheless, this skepticism is misguided. This paper establishes that there are cases of easy testimonial knowledge-how. It is structured as follows: First, a case is presented in which an agent acquires knowledge-how simply by accepting a speaker’s testimony. Second, it is argued that this knowledge-how is genuinely testimonial. Next, Poston’s (2016) arguments against easy testimonial knowledge-how are considered and rejected. The implications of the argument differ for intellectualists and anti-intellectualists about knowledge-how. The intellectualist must reject widespread assumptions about the communicative preconditions for the acquisition of testimonial knowledge. The anti-intellectualist must find a way of accommodating the dependence of knowledge-how on speaker reliability. It is not clear how this can be done.
Many of our mental states such as beliefs and desires are intentional mental states, or mental states with content. Externalism with regard to mental content says that in order to have certain types of intentional mental states (e.g. beliefs), it is necessary to be related to the environment in the right way. Internalism (or individualism) denies this, and it affirms that having those intentional mental states depends solely on our intrinsic properties. This debate has important consequences with regard to philosophical and empirical theories of the mind, and the role of social institutions and the physical environment in constituting the mind.
Three arguments against universally regular probabilities have been posed based on examples where, if regularity holds, then perfectly similar events must have different probabilities. Howson (2017) and Benci et al. (2016) have raised technical objections to these symmetry arguments, but their objections fail. Howson says that Williamson’s (2007) “isomorphic” events are not in fact isomorphic, but Howson is speaking of set-theoretic representations of events in a probability model. While those sets are not isomorphic, Williamson’s physical events are, in the relevant sense. Benci et al. claim that all three arguments rest on a conflation of different models, but they do not. They are founded on the premise that similar events should have the same probability in the same model, or in one case, on the assumption that a single rotation-invariant distribution is possible. Having failed to refute the symmetry arguments on such technical grounds, one could deny their implicit premises, which is a heavy cost, or adopt varying degrees of instrumentalism or pluralism about regularity, but that would not serve the project of accurately modelling chances.
A heated debate surrounds the significance of reproducibility as an indicator for research quality and reliability, with many commentators linking a “crisis of reproducibility” to the rise of fraudulent, careless and unreliable practices of knowledge production. Through the analysis of discourse and practices across research fields, I point out that reproducibility is not only interpreted in different ways, but also serves a variety of epistemic functions depending on the research at hand. Given such variation, I argue that the uncritical pursuit of reproducibility as an overarching epistemic value is misleading and potentially damaging to scientific advancement. Requirements for reproducibility, however they are interpreted, are one of many available means to secure reliable research outcomes. Furthermore, there are cases where the focus on enhancing reproducibility turns out not to foster high-quality research. Scientific communities and Open Science advocates should learn from inferential reasoning from irreproducible data, and promote incentives for all researchers to explicitly and publicly discuss (1) their methodological commitments, (2) the ways in which they learn from mistakes and problems in everyday practice, and (3) the strategies they use to choose which research component of any project needs to be preserved in the long term, and how.
What does it mean for something, like the fact that rain is forecast, to be a normative reason for an action like taking your umbrella, or attitude like believing it will rain? According to a widely and perennially popular view, concepts of “reasons” are all concepts of some kind of explanation. But explanations of what? On one way of developing this idea, the concept of a normative reason for an agent S to perform an action A is that of an explanation why it would be good (in some way, to some degree) for S to do A. This Reasons as Explanations of Goodness hypothesis (REG) has numerous virtues, and has had a number of champions. But like every other extant theory of normative reasons it faces some significant challenges, which prompt many more philosophers to be skeptical that it can correctly account for (all) our reasons. This paper demonstrates how five different puzzles about normative reasons can be solved by careful attention to the concept of goodness, and in particular observing the ways in which it—and consequently, talk about reasons—is sensitive to context. Rather than asking simply whether or not certain facts are reasons for S to do A, we need to explore the contexts in which it is and is not correct to describe a certain fact as “a reason” for S to do A.
Do we have a duty to explore space? In part one, I looked at Schwartz’s positive case for the existence of such a duty. That positive case rested on three main arguments. The first argument claimed that we have a duty to explore space in order to access scarce resources. …
This paper argues that the controversy over GM crops is not best understood in terms of the supposed bias, dishonesty, irrationality, or ignorance on the part of proponents or critics, but rather in terms of differences in values. To do this, the paper draws upon and extends recent work on the role of values and interests in science, focusing particularly on inductive risk and epistemic risk, and it shows how the GMO debate can help to further our understanding of the various epistemic risks that are present in science and how these risks might be managed.
This paper distinguishes two reasoning strategies for using a model as a “null”. Null modeling evaluates whether a process is causally responsible for a pattern by testing it against a null model. Baseline modeling measures the relative significance of various processes responsible for a pattern by detecting deviations from a baseline model. Scientists sometimes conflate these strategies because of their formal similarities, but they must distinguish them lest they privilege null models as accepted until disproved. I illustrate this problem with the neutral theory of ecology and use this as a case study to draw general lessons. First, scientists cannot draw certain kinds of causal conclusions using null modeling. Second, scientists can draw these kinds of causal conclusions using baseline modeling, but this requires more evidence than does null modeling.
Bayesian epistemology proposes norms on degrees of belief that are supposed to constitute rational ideals. The most widely endorsed norm is probabilism, which requires ideally rational agents to have degrees of belief that can be represented by a probability function. Unfortunately, probabilistic coherence is unattainable for human thinkers, because fully complying with the norm is too difficult for us. In response, Bayesians suggest that for limited thinkers, probabilistic coherence is an ideal to be approximated. We are supposedly better off the more closely our credences approximate the ideal. However, it is rarely discussed exactly in what sense credences are better if they approximate coherence more closely. In this article, we first clarify the way in which approximating coherence needs to be beneficial in order for probabilism to constitute an ideal in the intended sense. In Section 3, we present existing results from the literature that support the idea that probabilism is an ideal that should be approximated: On some measures of incoherence, being less incoherent reduces vulnerability to Dutch books. Furthermore, given certain other incoherence measures, some ways of being less incoherent have guaranteed benefits for the accuracy of one’s credences. The problem is that these known results rely on different ways of measuring closeness to coherence.
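The Dutch-book side of these results can be illustrated with a minimal sketch. The two-bet book and the particular credence values below are standard textbook assumptions, not taken from the article; the sketch shows one sense in which being less incoherent is beneficial: the guaranteed loss shrinks as credences in p and not-p approach coherence (summing to 1).

```python
# A minimal Dutch-book sketch.  The agent's credences in p and in not-p
# are given; she treats a bet paying `stake` if X as worth c(X) * stake,
# and so is willing to buy each bet at that price.
def sure_loss(c_p, c_notp, stake=1.0):
    price_paid = (c_p + c_notp) * stake
    payoff = stake  # exactly one of p, not-p is true, so one bet pays out
    return price_paid - payoff  # positive => guaranteed loss

print(round(sure_loss(0.6, 0.6), 3))   # 0.2: incoherent credences, guaranteed loss
print(round(sure_loss(0.55, 0.5), 3))  # 0.05: less incoherent, smaller sure loss
print(round(sure_loss(0.6, 0.4), 3))   # 0.0: coherent credences admit no Dutch book
```

The monotone relationship here is exactly what a Dutch-book-based incoherence measure tracks; the article's point is that different measures of distance from coherence need not agree on which of two incoherent credence functions is "closer" to the ideal.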
A policymaker must decide which intervention to perform in order to change a currently undesirable situation. The policymaker has at her disposal a team of experts, each with their own understanding of the causal dependencies between different factors contributing to the outcome. The policymaker has varying degrees of confidence in the experts’ opinions. She wants to combine their opinions in order to decide on the most effective intervention. We formally define the notion of an effective intervention, and then consider how experts’ causal judgments can be combined in order to determine the most effective intervention. We define a notion of two causal models being compatible, and show how compatible causal models can be combined. We then use this as the basis for combining experts’ causal judgments. We illustrate our approach on a number of real-life examples.
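The decision problem in this setup can be sketched with a simple baseline: pool each expert's predicted success probability for each intervention, weighted by the policymaker's confidence, and pick the intervention with the highest pooled probability. This is generic linear opinion pooling, not the paper's compatibility-based combination of causal models, and all names and numbers below are invented for illustration.

```python
# expert -> {intervention -> P(desired outcome)}: illustrative numbers only
experts = {
    "expert_1": {"subsidy": 0.7, "regulation": 0.4},
    "expert_2": {"subsidy": 0.5, "regulation": 0.6},
}
# The policymaker's confidence in each expert (weights sum to 1).
confidence = {"expert_1": 0.8, "expert_2": 0.2}

def pooled_success_prob(intervention):
    # Linear pool: confidence-weighted average of the experts' probabilities.
    return sum(confidence[e] * experts[e][intervention] for e in experts)

best = max(["subsidy", "regulation"], key=pooled_success_prob)
print(best, round(pooled_success_prob(best), 3))  # subsidy 0.66
```

Pooling final probabilities like this ignores the structure of the experts' causal models, which is precisely the information the paper's compatibility-based combination is designed to preserve.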
We provide formal definitions of degree of blameworthiness and intention relative to an epistemic state (a probability over causal models and a utility function on outcomes). These, together with a definition of actual causality, provide the key ingredients for moral responsibility judgments. We show that these definitions give insight into commonsense intuitions in a variety of puzzling cases from the literature.
There is a fundamental disagreement about which norm regulates assertion. Proponents of factive accounts argue that only true propositions are assertable, whereas proponents of non-factive accounts insist that at least some false propositions are. Puzzlingly, both views are supported by equally plausible (but apparently incompatible) linguistic data. This paper delineates an alternative solution: to understand truth as the aim of assertion, and to pair this view with a non-factive rule. The resulting account is able to explain all the relevant linguistic data, and finds independent support from general considerations about the differences between rules and aims.
We consider Geanakoplos and Polemarchakis’s generalization of Aumann’s famous result on “agreeing to disagree”, in the context of imprecise probability. The main purpose is to reveal a connection between the possibility of agreeing to disagree and the interesting and anomalous phenomenon known as dilation. We show that for two agents who share the same set of priors and update by conditioning on every prior, it is impossible to agree to disagree on the lower or upper probability of a hypothesis unless a certain dilation occurs. With some common topological assumptions, the result entails that it is impossible to agree not to have the same set of posterior probabilities unless dilation is present. This result may be used to generate sufficient conditions for guaranteed full agreement in the generalized Aumann setting for some important models of imprecise priors, and we illustrate the potential with an agreement result involving the density ratio classes. We also provide a formulation of our results in terms of “dilation-averse” agents who ignore information about the value of a dilating partition but otherwise update by full Bayesian conditioning. Keywords: agreeing to disagree; common knowledge; dilation; imprecise probability.
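Dilation itself can be shown with a small sketch in the style of Seidenfeld and Wasserman's classic example (illustrative, not the paper's own construction): X is a fair coin, Z is a coin whose bias theta is unknown, so the credal set contains one prior per value of theta, and Y = X XOR Z. Every prior agrees that P(Y=1) = 1/2, yet conditioning on X dilates that precise value to the whole unit interval.

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def p_y1(theta):
    # P(Y=1) = P(X=1)P(Z=0) + P(X=0)P(Z=1) = 1/2 for every theta
    return HALF * (1 - theta) + HALF * theta

def p_y1_given_x1(theta):
    # Given X=1, Y=1 iff Z=0, so the conditional probability is 1 - theta
    return 1 - theta

# One prior per candidate bias theta of the coin Z.
thetas = [Fraction(i, 100) for i in range(101)]
priors = {p_y1(t) for t in thetas}
posteriors = [p_y1_given_x1(t) for t in thetas]
print(priors)                            # {Fraction(1, 2)}: precise before conditioning
print(min(posteriors), max(posteriors))  # 0 1: the interval dilates after conditioning
```

This is the anomaly the abstract connects to agreement: learning the value of a dilating partition makes the agents' lower and upper probabilities strictly wider, which is what a "dilation-averse" agent refuses to condition on.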
Possible worlds models of belief have difficulties accounting for unawareness, the inability to entertain (and hence believe) certain propositions. Accommodating unawareness is important for adequately modelling epistemic states, and representing the informational content to which agents have access given their explicit beliefs. In this paper, I use neighbourhood structures to develop an original multi-agent model of explicit belief, awareness, and informational content, along with an associated sound and complete axiom system. I defend the model against the seminal impossibility result of Dekel et al. (1998), according to which three intuitive conditions preclude non-trivial unawareness on any ‘standard’ model of knowledge or belief. I argue that at least one of these conditions is implausible when applied to a model of belief. The plausibility of the other two rests on further questions regarding the scope and granularity of mental content. Finally, I show that, once we’ve jettisoned the least plausible of these conditions, it’s possible to strengthen the remainder while retaining non-trivial unawareness within a possible worlds model of belief with unawareness.
This entry focuses on the phenomenon of clinical delusions. Although the nature of delusions is controversial, as we shall see, delusions are often characterised as strange beliefs that appear in the context of mental distress. Indeed, clinical delusions are a symptom of psychiatric disorders such as dementia and schizophrenia, and they also characterise delusional disorders. The following case descriptions include one instance of erotomania, the delusion that one is loved by someone else, often of higher status, and one instance of Cotard delusion, the delusion that one is dead or disembodied.
The domain of “folk-economics” consists of the explicit beliefs about the economy held by laypeople untrained in economics: beliefs about such topics as the causes of the wealth of nations, the benefits or drawbacks of markets and international trade, the effects of regulation, the origins of inequality, the connection between work and wages, the economic consequences of immigration, or the possible causes of unemployment. These beliefs are crucial in forming people’s political beliefs and in shaping their reception of different policies. Yet they often conflict with elementary principles of economic theory and are often described as the consequences of ignorance, irrationality or specific biases. As we will argue, these perspectives fail to predict the particular contents of popular folk-economic beliefs and, as a result, there is no systematic study of the cognitive factors involved in their emergence and cultural success. Here we propose that the cultural success of particular beliefs about the economy is predictable if we consider the influence of specialized, largely automatic inference systems that evolved as adaptations to ancestral human small-scale sociality.
According to a conventional view, there exists no common-cause model of quantum correlations satisfying locality requirements. Indeed, Bell’s inequality is derived from certain locality conditions together with the assumption that a common cause exists, and the violation of the inequality has been experimentally verified. On the other hand, some researchers argue that the derivation of the inequality implicitly assumes the existence of a common common cause for multiple correlations, and that this assumption is unreasonably strong. On their view, what is needed to explain the quantum correlations is a separate common cause for each correlation. In this paper, however, we show that for almost all entangled states we cannot construct a local model consistent with the quantum mechanical predictions even when we require only the existence of a common cause for each correlation.
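The tension at issue can be seen numerically in the CHSH form of Bell's inequality: every local deterministic assignment of ±1 outcomes is bounded by 2, while the quantum singlet-state correlation E(a, b) = −cos(a − b) reaches 2√2 at standard textbook angles. This sketch illustrates the general inequality violation, not the paper's per-correlation common-cause argument.

```python
import math
from itertools import product

# Quantum singlet-state correlation for measurement angles a, b.
def E(a, b):
    return -math.cos(a - b)

# Standard angle choices that maximize the CHSH quantity.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S_quantum = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S_quantum), 3))  # 2.828, i.e. 2*sqrt(2): violates the bound of 2

# Any local deterministic strategy fixes outcomes A1, A2, B1, B2 in {-1, +1}
# in advance; brute force over all 16 strategies confirms the CHSH bound.
best = max(
    abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
    for A1, A2, B1, B2 in product([-1, 1], repeat=4)
)
print(best)  # 2
```

The brute-force bound corresponds to models with a single common cause for all four correlations; the paper's stronger claim is that the violation persists even when each correlation is allowed its own separate common cause.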
Liberal evidentialists disagree with conservative evidentialists about the nature of evidential support. According to the latter, a body of total evidence must always support a single attitude toward a given proposition better than it supports any alternative attitude toward that proposition. According to the former, a body of total evidence needn’t always support a single attitude toward a given proposition better than it supports any alternative attitude toward that proposition. Both views come in doxastic and credal versions. Credal versions concern the question whether a body of total evidence must always support a unique credence in a given proposition. Doxastic versions concern the question whether a body of total evidence must always support a unique doxastic attitude toward a given proposition, where the doxastic attitudes in question are just belief, disbelief, and suspension of judgment. In this paper, I focus on the doxastic versions of these views. I argue that the doxastic version of conservative evidentialism has unacceptable theoretical costs if it doesn’t have straightforward counterexamples, address the most plausible arguments against doxastic liberal evidentialism, and highlight some consequences of doxastic liberal evidentialism for epistemic agency and the epistemology of disagreement.
Purists think that changes in our practical interests can’t affect what we know unless those changes are truth-relevant with respect to the propositions in question. Pragmatists disagree. They think changes in our practical interests can affect what we know even if those changes aren’t truth-relevant with respect to the propositions in question. I argue that pragmatists are right, but for the wrong reasons, since pragmatists haven’t appreciated the best argument for their own view. As I show, there is an argument for pragmatism sitting in plain sight that is considerably more plausible than any extant argument for pragmatism. How, if at all, do our practical interests affect our knowledge? According to the thesis I will call ‘purism,’ changes in our practical interests can’t affect what we know unless those changes are truth-relevant with respect to the propositions in question. According to the negation of this thesis, which I will call ‘pragmatism,’ changes in our practical interests can affect what we know even if those changes aren’t truth-relevant with respect to the propositions in question. If pragmatism is right, then changes in our practical interests might affect our knowledge without affecting our evidence for the relevant proposition, the reliability of the cognitive faculties responsible for our belief in that proposition, the safety of our belief in that proposition, and so on, for any other truth-relevant property that we might care about.
The on-going debate over the ‘admissible contents of perceptual experience’ concerns the range of properties that human beings are directly acquainted with in perceptual experience. Regarding vision, it is relatively uncontroversial that the following properties can figure in the contents of visual experience: colour, shape, illumination, spatial relations, motion, and texture. The controversy begins when we ask whether any properties besides these figure in visual experience. We argue that ‘ensemble properties’ should be added to the list of visually admissible properties. Ensemble properties are features that belong to a set of perceptible objects as a whole as opposed to the individuals that constitute that set. They include such features as the mean size of an array of shapes or the average emotional expression of an array of faces. Recent work in vision science has yielded compelling evidence that the visual system routinely encodes such properties. We argue that epistemological considerations provide strong reasons to think that these properties figure in visual experience. Judgements about ensemble properties are immediately warranted by our perceptual experience, and the only plausible way that a perceptual experience could confer this warrant is if it confers awareness of ensemble properties.