A world beyond p-values? I was asked to write something explaining the background of my slides (posted here) in relation to the recent ASA “A World Beyond P-values” conference. I took advantage of some long flight delays on my return to jot down some thoughts:
The contrast between the closing session of the conference “A World Beyond P-values,” and the gist of the conference itself, shines a light on a pervasive tension within the “Beyond P-Values” movement. …
Deductive reasoning is one way by which we acquire new beliefs. Some of these beliefs so acquired amount to knowledge; others do not. Here are two principles, each of which states a sufficient condition for acquiring knowledge through deduction: Single-premise closure (SPC): For any propositions P and Q, and for any subject S, if S knows P and comes to believe Q solely on the basis of competently deducing it from P, while retaining knowledge of P throughout, then S knows Q.
In this paper I will draw attention to an important route to external world skepticism, which I will call confidence skepticism. I will argue that we can defang confidence skepticism (though not a meeker ‘argument from might’ which has got some attention in the 20th century literature on external world skepticism) by adopting a partially psychologistic answer to the problem of priors. And I will argue that certain recent work in the epistemology of mathematics and logic provides independent support for such psychologism.
In this paper I discuss a trivialization worry for currently popular formulations of the ‘access problem’ in philosophy of mathematics. I argue that we can avoid this worry by relating access worries to general epistemic norms of coincidence avoidance. Specifically, I propose that a realist theory of some domain of investigation (such as mathematics or morals) faces an access problem to the extent that accepting this theory commits one to positing any extra unexplained coincidence beyond those required by competing deflationary approaches to the same domain. I then use this formulation of the access problem to diagnose what goes wrong in Justin Clarke-Doane’s recent argument that there can be no access problem.
A well-known problem, noticed by Meirav, is that it is difficult to distinguish hope from despair. Both the hoper and the despairer are unsure about an outcome and they both have a positive attitude towards it. …
Transformative Experience is a rich, insightful, compelling book. LA Paul persuasively argues that our standard way of thinking about major life choices (and some minor ones too) is inadequate, because it fails to take into account the subjective phenomenal values of lived experiences. When deciding whether to do something, we need to assess how good the outcome will be for us. But Paul argues that in many such cases, we simply don’t have enough information to do this. And that’s because we don’t have information about the subjective phenomenal value of the experience we’re considering - that is, we don’t know what it’s like (for us) to have that experience. This means our decision is inherently under-informed. We can’t decide how to assign values to possible outcomes (undergoing the experience or failing to undergo the experience) because we don’t have a complete picture of what those values really are.
In this article, I argue against Kearns and Star’s reasons-as-evidence view, which identifies normative reasons to φ with evidence that one ought to φ. I provide a new counterexample to their view, the student case, which involves an inference to the best explanation from means to end or, more generally, from a derivative to a more foundational “ought” proposition. It shows that evidence that one ought to act a certain way is not in all cases a reason so to act. I present a diagnosis of the problem that is brought out by the counterexample.
Excuses are commonplace. They are part and parcel of our ordinary practice of holding each other morally responsible. But excuses are also curious. They have normative force. Whether someone has an excuse for something they have done matters for how it is rational to respond to their action. For example, an excuse can make it rational to forgo blame, to revise judgments of blameworthiness, and to feel compassion and pity instead of anger and resentment.
Objectual understanding—viz., the sort of understanding one has when one understands a subject matter or body of information—is often thought to be factive, in a way that (for example) mere coherent delusions are not. In short, understanding a subject matter demands we have at least some true beliefs about the subject matter in question. That being said, it is commonplace to claim that we understand some false subject matters or theories. For example, most high-school students have some understanding of Ptolemy’s earth-centred view of the universe, even though the Ptolemaic view is premised on a false conception of what revolves around what. One very natural way to reconcile the kind of factivity demanded of understanding with the datum that we can plausibly count as understanding false theories, models or subject matters is to point out a relevant fact about the way we regard ourselves as understanding (for instance) the Ptolemaic view: we understand it as false, which is to say, we see how the view holds together while at the same time appreciating that the view does not accurately represent what it purports to.
Can purely predictive models be useful in investigating causal systems? I argue “yes”. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in social and field sciences such success can only be achieved by purely predictive models, not by ones drawn from theory. Alas, the attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds – neither prediction nor explanation. Best go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.
Let us say that your ends are whatever you have ultimate (underived) reason to do or to bring about. This leaves open whether your ends are stance dependent and so “given” to you by your contingent desires, your nature as a rational being, your self-identity, etc., or whether at least some of your ends are not stance dependent and so not “given” to you in any of these ways. So understood, it is uncontroversial that you have reason to take the means to your ends. More specifically, some reason to do or bring about an end is going to transmit to reason to take the means. Using this as a point of departure, this paper considers what it is to be a means to an end and how much reason transmits from an end to its means. The theory on offer is a probability-raising theory that says roughly this: an action is a means to an end just in case it raises the probability of the end relative to the worst one could do—i.e., relative to that action that would make the end least probable. And the amount of reason transmitted from an end to a given means is a function of the degree to which it raises the probability of the end relative to the worst one could do.
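The rough probability-raising statement above lends itself to a small worked illustration. The sketch below is my own toy rendering, not the paper's formalism; the scenario, numbers, and function names are invented.

```python
def degree_of_means(p_end_given_action, p_end_worst):
    """Amount by which an action raises the probability of the end,
    relative to the worst available action."""
    return p_end_given_action - p_end_worst

# Toy scenario: the end is catching a train.
actions = {
    "take_taxi": 0.9,
    "walk": 0.4,
    "stay_home": 0.0,   # the action that makes the end least probable
}
worst = min(actions.values())

for action, p in actions.items():
    raised = degree_of_means(p, worst)
    # On the rough statement above, an action counts as a means just in
    # case it raises the probability of the end relative to the worst
    # one could do, and reason transmits in proportion to that degree.
    print(action, "is a means:", raised > 0, "| degree:", raised)
```

On this toy rendering, staying home is no means at all, and taking the taxi receives more transmitted reason than walking.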
If I come to think that I ought to go to Sweden, we might think that this judgment is somewhat appetitive: if I really think this, I must be somewhat inclined to go. But in contrast, if I judge that you ought to go to Sweden, it is far less clear that this involves any kind of inclination on my part: I might really think that you ought to go, but need not be at all in favor of your doing so (indeed, perhaps I would much prefer you to shirk your duties and stay). Other-regarding normative judgments seem to be a matter of mere recognition, not inclination. This casts doubt on noncognitivist views according to which all normative judgments are desire-like. But it fits much better with theories in the vicinity of desire-as-belief, which identify only some normative judgments with desires. So I shall argue. This paper is split into seven sections. Section 1 describes a natural way of formulating noncognitivism, which I label conativism. Section 2 describes the motivation argument, and presents a version that escapes some standard criticisms of that argument. In section 3, I argue that other-regarding normative judgments present a problem for the motivation argument, and indeed present a problem for conativism itself. Sections 4 and 5 consider two possible replies. Section 6 very briefly describes how the problem relates to the Frege-Geach problem. Section 7 argues that some other theories—such as desire-as-belief—may be able to accommodate the motivational role of normative judgment without falling prey to the same problem.
According to Nomy Arpaly and Zach Barnett, some philosophers prefer Truth and others prefer Dare. I love the distinction. It helps us see an important dynamic in the field. But it's not exhaustive. I think there are also Wonder philosophers. …
This was posted originally at the OUPBlog. This is the first in a series of cross-posted blogs by Roy T Cook (Minnesota) from the OUPBlog series on Paradox and Puzzles. The Liar paradox arises via considering the Liar sentence:
L: L is not true. …
You are a secret opponent of the Nazi regime, and you happen to see Schmidt sneaking up on Hitler with an axe and murderous intent. You know what’s happening: Schmidt believes that Hitler has been committing adultery with Mrs. Schmidt, and is going to murder Hitler. …
Consider the following: [Comic. Credit: xkcd (https://xkcd.com/882/)] Obviously this is bad science and even worse scientific reporting, but what can be done to combat it? More generally, what should be the scholarly response to the growing sense, among scientific researchers and the lay public alike, that scientific publications are not trustworthy — that is, that the report of a statistically significant finding in a reputable scientific journal does not in general warrant drawing any meaningful conclusions? A new paper in the journal Nature Human Behaviour proposes a simple but radical solution: the default P-value threshold for statistical significance should be changed from 0.05 to 0.005 for claims of new discoveries. The paper has dozens of co-authors, many of them quite distinguished. …
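To see why the threshold matters, consider the arithmetic behind the xkcd strip: with 20 independent tests of true null hypotheses, the chance of at least one spurious "significant" result is large at alpha = 0.05 and much smaller at the proposed alpha = 0.005. A minimal sketch (the helper name is mine):

```python
def familywise_error(alpha, n_tests):
    """P(at least one false positive) across n independent tests
    when every null hypothesis is true."""
    return 1 - (1 - alpha) ** n_tests

# Twenty jellybean colors, twenty independent tests:
print(round(familywise_error(0.05, 20), 3))   # roughly 0.642
print(round(familywise_error(0.005, 20), 3))  # roughly 0.095
```

So at the conventional threshold, a green-jellybean "discovery" is more likely than not; tightening the default to 0.005 shrinks that risk considerably, though it does not eliminate the multiple-comparisons problem itself.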
The essay introduces the problem of aesthetic unreliability: the variety of ways in which our aesthetic experience is difficult to grasp, and the resulting confusion and unreliability in what we take to be our taste.
Michael Cohen (1992-2017)
Also against individual IQ worries
Scott Alexander recently blogged “Against Individual IQ Worries.” Apparently, he gets many readers writing to him terrified that they scored too low on an IQ test, and therefore they’ll never be able to pursue their chosen career, or be a full-fledged intellectual or member of the rationalist community or whatever. …
Doxastic characterizations of the set of Nash equilibrium outcomes and of the set of backward-induction outcomes are provided for general perfect-information games (where there may be multiple backward-induction solutions). We use models that are behavioral, rather than strategy-based, where a state only specifies the actual play of the game and not the hypothetical choices of the players at nodes that are not reached by the actual play. The analysis is completely free of counterfactuals and no belief revision theory is required, since only the beliefs at reached histories are specified.
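For readers who want backward induction in concrete form, here is a minimal solver for finite perfect-information games. The tree encoding and payoffs are invented for illustration and are not drawn from the paper's framework:

```python
def backward_induct(node):
    """Return (payoff_profile, play) for a node.
    A leaf is a tuple of payoffs, one per player; an internal node
    is a list [player_index, {action: subtree}]."""
    if isinstance(node, tuple):          # leaf: payoff profile
        return node, []
    player, moves = node
    best = None
    for action, subtree in moves.items():
        payoffs, play = backward_induct(subtree)
        # The moving player keeps whichever continuation pays them most.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + play)
    return best

# A two-player toy game: player 0 moves first, then player 1.
game = [0, {
    "L": [1, {"l": (2, 1), "r": (0, 0)}],
    "R": (3, 0),
}]
payoffs, play = backward_induct(game)
print(payoffs, play)   # (3, 0) ['R']
```

Note that this sketch breaks payoff ties by the first action encountered, whereas the abstract explicitly allows for multiple backward-induction solutions; a faithful implementation would collect all maximizers.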
Stewart Cohen has formulated what he calls the problem of easy knowledge, which is said to plague any epistemological position that posits basic knowledge. The problem of easy knowledge takes two forms. Here we want to focus on the bootstrapping version of the problem. We argue that Cohen’s misgivings only apply to a simple and rather silly form of bootstrapping, while there are unproblematic and important forms of bootstrapping. We develop the distinction between the problematic and unproblematic forms of bootstrapping, and argue that good bootstrapping poses no problem for proponents of epistemic entitlement—properly understood. Indeed, we argue that an inarticulate form of good bootstrapping is important in becoming the kinds of competent perceptual systems that many of us humans are. At the same time, we think that what we have to say about this good bootstrapping has important consequences for how one should understand the epistemic entitlement characteristic of human perception.
Philosophical methods play a crucial role in philosophical inquiry. When it comes to questions about the nature, status, and content of morality—the special purview of moral philosophy—we look to philosophical methods to help guide the construction of normative and metaethical theories, and to provide the basis for evaluating their individual and comparative merits. One of the tasks of moral epistemology is to determine how this is to be done well. Here we investigate the construction and evaluation of theories in metaethics, focusing on the nature of the methods that should govern metaethical theorizing and their relation to such theorizing. As we’ll see, doing so requires attending to both the possible goals of metaethical inquiry and (what we’ll call) the metaethical data—the source-material utilized by good methods to achieve those goals. The main claims about methods, goals, and data for which we’ll argue are these: First, candidate methods for metaethical theorizing must be assessed in light of the epistemic goal(s) of metaethical inquiry. Second, while there are a variety of epistemic goals that different methods may properly aspire to achieve, several prominent methods face significant challenges when assessed in light of the attractive goal of understanding.
We endorse Stanford’s project, which calls attention to features of human psychology that exhibit a “puzzling combination of objective and subjective elements,” and that are central to cooperation. However, we disagree with his delineation of the explanatory target. What he calls “externalization or objectification” conflates two separate properties, neither of which can serve as the mark of the moral.
Salience-sensitivity is a form of anti-intellectualism that says the following: whether a true belief amounts to knowledge depends on which error-possibilities are salient to the believer. I will investigate whether salience-sensitivity can be motivated by appeal to bank case intuitions. I will suggest that so-called third-person bank cases threaten to sever the connection between bank case intuitions and salience-sensitivity. I will go on to argue that salience-sensitivists can overcome this worry if they appeal to egocentric bias, a general tendency to project our own mental states onto others. I will then suggest that a similar strategy is unavailable to stakes-sensitivists, who hold that whether a true belief amounts to knowledge depends on what is at stake for the believer. Bank case intuitions motivate salience- but not stakes-sensitivity.
Descriptive decision theory is concerned with characterising and explaining regularities in the choices that people are disposed to make. It is standardly distinguished from a parallel enterprise, normative decision theory, which seeks to provide an account of the choices that people ought to be disposed to make. Much of the work in this area has been devoted to the building and testing of formal models that aim to improve on the descriptive adequacy of a framework known as “Subjective Expected Utility” (SEU). This adequacy was first called into question in the middle of the last century and further challenged by a slew of experimental work in psychology and economics from the mid-1960s onwards.
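For readers unfamiliar with SEU, its core recommendation reduces to ranking acts by a probability-weighted sum of utilities. A toy illustration, with invented states, probabilities, and utilities:

```python
def seu(probabilities, utilities):
    """Subjective expected utility of an act:
    sum over states of p(state) * u(outcome of act in state)."""
    return sum(p * u for p, u in zip(probabilities, utilities))

# States: rain / no rain, with subjective probabilities 0.3 / 0.7.
p = [0.3, 0.7]
acts = {
    "take_umbrella": [5, 3],   # dry either way, but encumbered
    "leave_it":      [0, 6],   # soaked if it rains, free hands if not
}
# SEU ranks acts by expected utility; descriptive decision theory asks
# whether people's actual choices conform to this ranking.
for act, u in acts.items():
    print(act, seu(p, u))
```

Here SEU favors leaving the umbrella (4.2 versus 3.6); the experimental literature mentioned above documents systematic departures from such rankings, e.g. the Allais and Ellsberg patterns.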
Google's Self-Driving Car - via Becky Stern on Flickr
Swerve or slow down? That is the question. The question that haunts designers of self-driving cars. The dilemma will be familiar to students of moral philosophy. …
G.A. Barnard: The “catch-all” factor: probability vs likelihood
Posted on September 25, 2017 by Mayo
Barnard 23 Sept. 1915 – 9 Aug. 2002
With continued acknowledgement of Barnard’s birthday on Friday, Sept. 23, I reblog an exchange on catchall probabilities from “The Savage Forum” (pp. 79-84, Savage 1962) with some new remarks. …
A lot of conventional work in formal epistemology proceeds under the assumption that subjects have precise credences. Traditional statements of the requirement of coherence presuppose that you have a precise credence function, for instance, and say that this function must satisfy the probability axioms. The traditional rule for updating says that you must update your precise credence function by conditionalizing it on the information that you learn. Meanwhile, fans of imprecise credences challenge the assumption behind these rules. They argue that your partial beliefs are best represented not by a single function, but by a set of functions, or representor.
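The contrast can be made concrete with a toy update. Below, a precise credence function over four worlds is conditionalized on evidence, and a two-member representor is updated member by member. All numbers and propositions are invented for illustration:

```python
def conditionalize(credence, evidence):
    """Bayesian conditionalization over a finite set of worlds:
    zero out worlds incompatible with the evidence, renormalize."""
    total = sum(credence[w] for w in evidence)
    return {w: (credence[w] / total if w in evidence else 0.0)
            for w in credence}

precise = {"rain&cold": 0.1, "rain&warm": 0.2,
           "dry&cold": 0.3, "dry&warm": 0.4}
rain = {"rain&cold", "rain&warm"}

# Precise update: one credence function in, one credence function out.
posterior = conditionalize(precise, rain)
print(posterior)

# Imprecise update: conditionalize every member of the representor.
representor = [precise,
               {"rain&cold": 0.25, "rain&warm": 0.25,
                "dry&cold": 0.25, "dry&warm": 0.25}]
updated = [conditionalize(c, rain) for c in representor]
print(sorted(c["rain&cold"] for c in updated))
```

After the update, the precise agent has a single posterior, while the imprecise agent's attitude toward "rain&cold" is represented by a spread of values, one per member of the representor.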
G.A. Barnard: 23 Sept. 1915 – 9 Aug. 2002
Today is George Barnard’s birthday. I met him in the 1980s and we corresponded off and on until 1999. Here’s a snippet of his discussion with Savage (1962) (link below [i]) that connects to issues often taken up on this blog: stopping rules and the likelihood principle. …
Pragmatic encroachment theories have a problem with evidence. On the one hand, the arguments that knowledge is interest-relative look like they will generalise to show that evidence too is interest-relative. On the other hand, our best story of how interests affect knowledge presupposes an interest-invariant notion of evidence. The aim of this paper is to sketch a theory of evidence that is interest-relative, but which allows that ‘best story’ to go through with minimal changes. The core idea is that the evidence someone has is just what evidence a radical interpreter says they have. And a radical interpreter is playing a kind of game with the person they are interpreting. The cases that pose problems for pragmatic encroachment theorists generate fascinating games between the interpreter and the interpretee. They are games with multiple equilibria. To resolve them we need to detour into the theory of equilibrium selection. I’ll argue that the theory we need is the theory of risk-dominant equilibria. That theory will tell us how the interpreter will play the game, which in turn will tell us what evidence the person has. The evidence will be interest-relative, because what the equilibrium of the game is will be interest-relative. But it will not particularly undermine the story we tell about how interests usually affect knowledge.
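Risk dominance can be illustrated with a small calculation. The sketch below applies the standard Harsanyi–Selten product-of-deviation-losses test to a stag hunt with invented payoffs; it is offered only as a gloss on the notion the paper invokes, not as its account of interpreter games:

```python
def risk_dominant(u1, u2):
    """For a 2x2 game where (0, 0) and (1, 1) are both strict Nash
    equilibria, return the risk-dominant one: the equilibrium with
    the larger product of unilateral deviation losses."""
    loss_00 = (u1[0][0] - u1[1][0]) * (u2[0][0] - u2[0][1])
    loss_11 = (u1[1][1] - u1[0][1]) * (u2[1][1] - u2[1][0])
    return (0, 0) if loss_00 >= loss_11 else (1, 1)

# Stag hunt with invented payoffs: strategy 0 = stag, 1 = hare.
u1 = [[4, 0], [3, 2]]                                   # row player
u2 = [[u1[j][i] for j in range(2)] for i in range(2)]   # symmetric game
print(risk_dominant(u1, u2))   # (1, 1): hunting hare is risk-dominant
```

The point of the criterion is that (stag, stag) is better for both players, but deviating from it is costlier to bet on; risk dominance selects the equilibrium that is safest against the other player's possible deviation.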
Many think that sentences about what metaphysically explains what are true iff there exist grounding relations. This suggests that sceptics about grounding should be error theorists about metaphysical explanation. We think there is a better option: a theory of metaphysical explanation which offers truth conditions for claims about what metaphysically explains what that are not couched in terms of grounding relations, but are instead couched in terms of, inter alia, psychological facts. We do not argue that our account is superior to grounding-based accounts. Rather, we offer it to those already ill-disposed towards grounding.