In this paper, I show how one might resist two influential arguments for the Likelihood Principle by appealing to the ontological significance of creative intentions. The first argument for the Likelihood Principle that I consider is the argument from intentions. After clarifying the argument, I show how the key premiss in the argument may be resisted by maintaining that creative intentions sometimes independently matter to what experiments exist. The second argument that I consider is Gandenberger’s (2015) rehabilitation of Birnbaum’s (1962) proof of the Likelihood Principle from the (supposedly) more intuitively obvious principles of conditionality and sufficiency. As with the argument from intentions, I show how Gandenberger’s argument for his Experimental Conditionality Principle may be resisted by maintaining that creative intentions sometimes independently matter to what experiments exist.
The relational interpretation of quantum mechanics proposes to solve the measurement problem and reconcile the completeness and locality of quantum mechanics by postulating that events and facts are relative to the observer, instead of an absolute “view from nowhere”. The aim of this paper is to clarify this interpretation, and in particular one of its central claims concerning the possibility for an observer to have knowledge about another observer’s events. I consider three possible readings of this claim (deflationist, relationist and relativist), and develop the most promising one, relativism, to show how it fares when confronted with the traditional interpretative problems of quantum mechanics. Although it provides answers to some problems, I claim that there is currently no suitable locality criterion for evaluating whether the resulting interpretation is local or not.
This paper semantically analyzes “free perception” sequences in pictorial narratives such as comics, where one panel shows a character looking and the next panel shows what they see. Pictorial contents are assumed to be viewpoint-centered propositions. A framework for the representation of pictorial narratives is used in which indexing and embedding of certain panels are characterized by hidden operators. The resulting enriched pictorial narratives are interpreted in a dynamic framework. A possible worlds construction using action alternatives captures the epistemic effect of perceptual actions. Free perception sequences are implicitly anaphoric, as analyzed using cross-panel indexing. It is argued that some cases of free perception are truly intensional, and must involve embedding in the framework that is employed. Examples are drawn from comics and film.
Anna Alexandrova, A Philosophy for the Science of Well-Being (OUP, 2017). Here’s an attitude I sometimes encounter among scientists: “It is not my job as a scientist to figure out what true well-being is and to choose my constructs accordingly. …
I’d like to explain a conjecture about Wigner crystals, which we came up with in a discussion on Google+. It’s a purely mathematical conjecture that’s pretty simple to state, motivated by the picture above. …
Like most other ancient philosophers, Plato maintains a virtue-based
eudaemonistic conception of ethics. That is to say, happiness or
well-being (eudaimonia) is the highest aim of moral thought
and conduct, and the virtues (aretê:
‘excellence’) are the requisite skills and dispositions
needed to attain it. If Plato’s conception of happiness is
elusive and his support for a morality of happiness seems somewhat
subdued, there are several reasons. First, he nowhere defines the
concept or makes it the direct target of investigation, but introduces
it in an oblique way in the pursuit of other questions.
Until recently, I assumed everyone agreed to something like this principle:
(1) If performing an action constitutes you as a bad person, the action is morally wrong. Virtue ethicists, of course, make this a biconditional that defines wrongness, but I would have assumed that just about everybody would agree that the conditional (1) is true. …
This paper considers the temporal dimension of data processing and use, and the ways in which it affects the production and interpretation of knowledge claims. I start by distinguishing the time at which data collection, dissemination and analysis occur (Data time, or Dt) from the time in which the phenomena for which data serve as evidence operate (Phenomena time, or Pt). Building on the analysis of two examples of data re-use from modelling and experimental practices in biology, I then argue that Dt affects how researchers (1) select and interpret data as evidence and (2) identify and understand phenomena.
The paper highlights how a popular version of epistemological disjunctivism (Pritchard 2012, 2016) labours under a kind of ‘internalist challenge’—a challenge that seems to have gone largely unacknowledged by disjunctivists. This is the challenge to vindicate the supposed ‘internalist insight’ that disjunctivists claim their view does well to protect (cf.
In our last several chapters we have defended Jonathan Bennett’s Simple Theory of Counterfactuals. One consequence of Bennett’s theory is that counterfactual backtracking (supposing that the past would be different if the present were different) is legitimate. We closed our last chapter endorsing backtracking writ large by arguing that nomological determinism entails counterfactual determinism.
[Thanks to the Singularity Bros podcast for inspiring me to write this post. It was a conversation I had with the hosts of this podcast that prompted me to further elaborate on the idea of ethical behaviourism.] …
What is meant by ‘One True Logic’ is sometimes not made entirely clear — what is a logic, and what is it for one of them to be true? Since the study of logic involves giving a theory of logical consequence for formal languages, the view must be that there is one true theory of logical consequence. In order for such a logic to be true, it must be capable of correct representation. What do logics represent? It is clear from the various uses of applied logic that logics can represent many different sorts of phenomena. For the purposes of traditional pure logic, though, theories of consequence are frequently taken to represent natural language inference.
Pierre Bayle (1647–1706) was a Huguenot, i.e., a French
Protestant, who spent almost the whole of his productive life as a
refugee in Holland. His life was devoted entirely to scholarship, and
his erudition was second to none in his, or perhaps any,
period. Although much of what he wrote was embedded in technical
religious issues, for a century he was one of the most widely read
philosophers. In particular, his Dictionnaire historique et
critique was among the most popular works of the eighteenth
century. The content of this huge and strange, yet fascinating work is
difficult to describe: history, literary criticism, theology,
obscenity, in addition to philosophical treatments of toleration, the
problem of evil, epistemological questions, and much more.
Richard Lewontin is often cited as an inspiration for and founder of what is now known as Niche Construction Theory. The first goal of this paper is to argue that Lewontin and Niche Construction Theory present distinct arguments from niche construction against Adaptationism. While Niche Construction Theory argues that natural selection is not the only adaptive evolutionary force, Lewontin rejects the externalist characterization of natural selection. The key difference lies in the types of phenomena that are allowed to count as “niche construction” and their argumentative roles. The second goal is to argue that it is time to revive Lewontin’s argument. I argue that it finds renewed support in Denis Walsh’s ecological affordance framework (Walsh 2015) and empirical evidence from ecological developmental biology (Sultan 2015). Reexamining the roots of Niche Construction Theory (Odling-Smee 1988), I suggest that the rich conceptual resources therein provide a way for Niche Construction Theory to also develop a neo-Lewontinian argument against the externalism of natural selection explanations.
In an earlier post, I argued that the definition of omniscience as knowing every truth and believing nothing but truths is insufficient for omniscience because an omniscient being would also be certain, and knowledge of every truth does not guarantee certainty of every truth. …
I’ve been thinking about Thomson’s Violinist case. I should say about that case that it seems utterly obvious to me that in the case where the violinist is your child and you are in no long term danger from the connection, it’s a vicious failure of parental duties to disconnect. …
Anna Alexandrova, A Philosophy for the Science of Well-Being (OUP, 2017). Different people expect different things from theories of well-being. Some expect that they systematise in a maximally general way intuitions about goods that constitute well-being, others that they state the most important causes of well-being, still others that they help them to lead a good life. …
Decision theory notes
Posted on Tuesday, 05 Dec 2017
In the past three months I wrote a draft of a possible textbook on
decision theory. Here it is. I've used these notes as the basis for my honours/MSc course "Belief,
Desire, and Rational Choice". …
The idea that gauge theory has ‘surplus’ structure poses a puzzle: in one much-discussed sense, this structure is redundant; on the other hand, it is also widely held to play an essential role in the theory. In this paper, we employ category-theoretic tools to illuminate an aspect of this puzzle. We precisify what is meant by ‘surplus’ structure by means of functorial comparisons with equivalence classes of gauge fields, and then show that such structure is essential for any theory that represents a rich collection of physically relevant fields which are ‘local’ in nature.
Kuhn argued that scientific theory choice is, in some sense, a rational matter, but one that is not fully determined by shared objective scientific virtues like accuracy, simplicity, and scope. Okasha imports Arrow’s impossibility theorem into the context of theory choice to show that rather than not fully determining theory choice, these virtues cannot determine it at all. If Okasha is right, then there is no function (satisfying certain desirable conditions) from ‘preference’ rankings supplied by scientific virtues over competing theories (or models, or hypotheses) to a single all-things-considered ranking. This threatens the rationality of science. In this paper we show that if Kuhn’s claims about the role that subjective elements play in theory choice are taken seriously, then the threat dissolves.
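The aggregation problem Okasha raises can be made vivid with the classic Condorcet cycle, which underlies Arrow-style impossibility results. The sketch below is not from the paper: the virtue rankings and theories (`T1`–`T3`) are hypothetical, and pairwise majority voting stands in for one candidate aggregation rule.

```python
from itertools import combinations

# Hypothetical rankings: each scientific virtue ranks three rival
# theories, best first. These particular rankings form a Condorcet cycle.
rankings = {
    "accuracy":   ["T1", "T2", "T3"],
    "simplicity": ["T2", "T3", "T1"],
    "scope":      ["T3", "T1", "T2"],
}

def majority_prefers(a, b):
    """True iff a majority of virtues rank theory a above theory b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes > len(rankings) / 2

# Pairwise majority yields T1 > T2 and T2 > T3, yet also T3 > T1:
# no transitive all-things-considered ranking respects all three verdicts.
for a, b in combinations(["T1", "T2", "T3"], 2):
    print(a, ">", b, ":", majority_prefers(a, b))
```

The intransitivity exhibited here is one symptom of the general phenomenon: for three or more options, no aggregation rule satisfies all of Arrow's conditions at once.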
Discussing the contemporary debate about the metaphysics of relations and structural realism, I analyse the philosophical significance of relational quantum mechanics (RQM). By relativising properties of objects (or systems) to other objects (or systems), RQM affirms that reality is inherently relational. My claim is that RQM can be seen as an instantiation of the ontology of ontic structural realism, for which relations are prior to objects, since it provides good reasons for the argument from the primacy of relations. To provide some evidence, I interpret RQM with a focus on its metametaphysics, in particular on the very concept of relation and the meaning this concept assumes in the dispute between realism and antirealism.
In this paper I connect two debates in the philosophy of science: the question of scientific representation and that of model and theoretical equivalence. I argue that by paying attention to how a model is used to draw inferences about its target system, we can define a notion of theoretical equivalence that turns on whether two theories’ models license the same claims about the same target systems. I briefly consider the implications of this for two questions that have recently been discussed in the context of the formal philosophy of science.
The destruction of graduate education in the United States
As everyone knows, the flaming garbage fire of a tax bill has passed the Senate, thanks to the spinelessness of John McCain, Lisa Murkowski, Susan Collins, and Jeff Flake. …
I'm puzzled that in the literature on the nature of sex, gender, race, etc., there are so few philosophers who take a biological realist stance. Maybe this is a function of who is drawn to these topics. …
Recently, I’ve been worried about arguments like this:
It is always more perfect to be able to do more things. Being able to do impossible things is a way of being able to do more things. So, a perfect being can do impossible things. …
Classical higher-order logic, when utilized as a meta-logic in which various other (classical and non-classical) logics can be shallowly embedded, is well suited for realising a universal logic reasoning approach. Universal logic reasoning, in turn, as already envisioned by Leibniz, may support the rigorous formalisation and deep logical analysis of rational arguments within machines. A respective universal logic reasoning framework is described and a range of exemplary applications are discussed. In the future, universal logic reasoning, in combination with appropriate, controlled forms of rational argumentation, may serve as a communication layer between humans and intelligent machines.
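The shallow-embedding idea the abstract describes can be illustrated outside the Isabelle/HOL setting the paper actually uses. Below is a minimal, purely illustrative Python sketch: modal propositions are embedded as host-language functions from worlds to truth values, with box and diamond defined via an accessibility relation. The frame (`worlds`, `access`) and the atoms `p`, `q` are made-up example data, not anything from the paper.

```python
# Toy shallow embedding of propositional modal logic in a host language
# (illustrative only; the paper embeds logics in classical higher-order logic).

# A hypothetical Kripke frame: worlds and an accessibility relation.
worlds = {0, 1, 2}
access = {0: {1}, 1: {2}, 2: set()}

# A modal proposition is embedded as a function from worlds to bool.
def box(p):
    """Box p holds at w iff p holds at every world accessible from w."""
    return lambda w: all(p(v) for v in access[w])

def dia(p):
    """Diamond p holds at w iff p holds at some world accessible from w."""
    return lambda w: any(p(v) for v in access[w])

def impl(p, q):
    """Material implication, lifted pointwise to world-indexed propositions."""
    return lambda w: (not p(w)) or q(w)

def valid(p):
    """A formula is valid on this frame iff it holds at every world."""
    return all(p(w) for w in worlds)

# Example atomic propositions (arbitrary choices).
p = lambda w: w != 2
q = lambda w: w >= 1

# The K axiom, box(p -> q) -> (box p -> box q), is valid on every frame.
k_axiom = impl(box(impl(p, q)), impl(box(p), box(q)))
```

Here `valid(k_axiom)` returns `True`, as K holds on every frame, while `valid(box(p))` is `False`, since `p` fails at world 2, which is accessible from world 1. The point of the embedding style is exactly this: the meta-logic's quantifiers and connectives do the semantic work, so object-logic validity becomes an ordinary statement of the host logic.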
Starting from a generalization of the standard axioms for a monoid we present a step-wise development of various, mutually equivalent foundational axiom systems for category theory. Our axiom sets have been formalized in the Isabelle/HOL interactive proof assistant, and this formalization utilizes a semantically correct embedding of free logic in classical higher-order logic. The modeling and formal analysis of our axiom sets has been significantly supported by series of experiments with automated reasoning tools integrated with Isabelle/HOL. We also address the relation of our axiom systems to alternative proposals from the literature, including an axiom set proposed by Freyd and Scedrov for which we reveal a technical issue (when encoded in free logic): either all operations, e.g. morphism composition, are total or their axiom system is inconsistent. The repair for this problem is quite straightforward, however.
Comparativism is the view that comparative beliefs (e.g., believing p to be more likely than q) are more fundamental than partial beliefs (e.g., believing p to some degree x). In this paper, I first provide an account of how comparativism can make sense of quantitative comparisons (e.g., believing p twice as much as q), which generalises and improves upon the standard comparativist approach. This is achieved by means of a simple ‘Ramseyan’ representation theorem, with axioms demonstrably weaker than those to which comparativists usually appeal. I then provide a number of arguments against comparativism. Ultimately, there are too many things that we ought to be able to say about partial beliefs that we cannot say under any version of comparativism. Moreover, there are alternative ways to account for the measurement of belief that need not face the same limitations.
My interest in what is now called the science of well-being dates back to my graduate school days at UC San Diego. Sometime in the mid-aughts I came across a debate between psychologists who advanced ‘hedonic profile’ measures of happiness and those who favoured life satisfaction questionnaires. …
Last time, I argued that there are substantive open questions about whether the theoretical constructs of formal linguistics play any role in the psychological processes underlying language use. Let’s now address those questions.When people talk about “the psychological reality of syntax”, there are (at least) two importantly different types of psychological state that they might have in mind. …