
Information theory presupposes the notion of an epistemic agent, such as a scientist or an idealized human. Despite that, information theory is increasingly invoked by physicists concerned with fundamental physics, physics at very high energies, or generally with the physics of situations in which even idealized epistemic agents cannot exist. In this paper, I shall try to determine the extent to which the application of information theory in those contexts is legitimate. I will illustrate my considerations using the case of black hole thermodynamics and Bekenstein’s celebrated argument for his formula for the entropy of black holes. This example is particularly pertinent to the theme of the present collection because it is widely accepted as ‘empirical data’ in notoriously empirically deprived quantum gravity, even though the laws of black hole thermodynamics have so far evaded direct empirical confirmation.

For more than twenty-five years, Fine has been challenging the traditional interpretation of the violations of Bell inequalities (BI) by experiment. A natural interpretation of Fine’s theorem is that it provides us with an alternative set of assumptions on which to put the blame for the failure of the BI, and a new interpretation of the violation of the BI by experiment should follow. This is not, however, how Fine interprets his theorem. Indeed, Fine claims that his result undermines other interpretations, including the traditional interpretation in terms of local realism. The aim of this paper is to understand and to assess Fine’s claims. We distinguish three different strategies that Fine uses in order to support his interpretation of his result. We show that none of these strategies is successful. Fine fails to prove that local realism is not at stake in the violation of the BI by quantum phenomena.

The counterfactual tradition of defining actual causation has come a long way since Lewis started it off. However, there are still important open problems that need to be solved. One of them is the (in)transitivity of causation. Endorsing transitivity was a major source of trouble for the approach taken by Lewis, which is why currently most approaches reject it. But transitivity has never lost its appeal, and there is a large literature devoted to understanding why this is so. Starting from a survey of this work, we will develop a formal analysis of transitivity and the problems it poses for causation. This analysis provides us with a sufficient condition for causation to be transitive, a sufficient condition for dependence to be necessary for causation, and several characterisations of the transitivity of dependence. Finally, we show how this analysis leads naturally to several conditions a definition of causation should satisfy, and use those to suggest a new definition of causation.

There is a familiar philosophical position – sometimes called the doctrine of the open future – according to which future contingents (claims about underdetermined aspects of the future) systematically fail to be true. For instance: supposing that there are ways things could develop from here in which Trump is impeached, and in which he is not, it is not now true that Trump will be impeached, and not now true that Trump will not be impeached. For well over 2000 years, however, open futurists have been accused of denying certain logical laws – bivalence, excluded middle, or both – for entirely ad hoc reasons, most notably, that their denials are required for the preservation of something we hold dear. In a recent paper, however, I sought to argue that this deeply entrenched narrative ought to be overturned. My thought was this: given a popular, plausible approach to the semantics of future contingents, we can reduce the question of their status to the Russell/Strawson debate concerning presupposition failure, definite descriptions, and bivalence. In that case, we will see that open futurists in fact needn’t deny bivalence (Russell), or, if they do, they will do so for perfectly general (Strawsonian) reasons – reasons for which we all must deny bivalence. Of course, the metaphysical objections to the open futurist’s model of the future will remain just as they were. However, the millennia-old “semantic” or “logical” objections to the doctrine would be answered.

Computer simulations of an epistemic landscape model, modified to include an explicit representation of a centralised funding body, show that the method of funding allocation has significant effects on the communal trade-off between exploration and exploitation, with consequences for the community’s ability to generate significant truths. The results show this effect is contextual and depends on the size of the landscape being explored, with funding that includes explicit random allocation performing significantly better than peer review on large landscapes. The paper proposes a way of incorporating external institutional factors in formal social epistemology, and offers a way of bringing such investigations to bear on current research policy questions.
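The landscape-size comparison can be made concrete with a drastically simplified toy model (my own construction for illustration only; the function `run`, the one-dimensional landscape, and all parameter values are assumptions, not the paper’s actual simulation): each round a funder backs half the agents, chosen either at random (a lottery) or by current performance (a proxy for peer review), and we track how much of the landscape gets visited.

```python
import random

def run(allocation, size=200, agents=10, rounds=300, seed=1):
    """Toy epistemic landscape: agents do local search; only funded agents move."""
    rng = random.Random(seed)
    landscape = [rng.random() for _ in range(size)]          # significance of each site
    pos = [rng.randrange(size) for _ in range(agents)]
    visited = set(pos)
    for _ in range(rounds):
        if allocation == "random":
            funded = rng.sample(range(agents), agents // 2)  # lottery funding
        else:                                                # "peer review" proxy:
            funded = sorted(range(agents),                   # fund current best performers
                            key=lambda i: -landscape[pos[i]])[:agents // 2]
        for i in funded:
            pos[i] = (pos[i] + rng.choice([-1, 1])) % size   # one local exploration step
            visited.add(pos[i])
    return len(visited) / size                               # fraction of landscape explored

print(run("random"), run("peer"))
```

Nothing here reproduces the paper’s results; the sketch only exhibits the kind of quantity (fraction of the landscape explored) on which the two allocation schemes can be compared.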

In this paper I investigate whether certain substructural theories are able to dodge paradox while at the same time containing what might be viewed as a naive validity predicate. To this end I introduce the requirement of internalization: roughly, that an adequate theory of validity should prove that its own metarules are validity-preserving. The main point of the paper is that substructural theories fail this requirement in various ways.

11 August 1895 – 12 June 1980
Continuing with my Egon Pearson posts in honor of his birthday, I reblog a post by Aris Spanos: “Egon Pearson’s Neglected Contributions to Statistics”. Egon Pearson (11 August 1895 – 12 June 1980) is widely known today for his contribution in recasting Fisher’s significance testing into the Neyman–Pearson (1933) theory of hypothesis testing. …

It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:
• Complex adaptive system design (part 1), Azimuth, 2 October 2016. …

As Harvey Brown emphasizes in his book Physical Relativity, inertial motion in general relativity is best understood as a theorem, and not a postulate. Here I discuss the status of the “conservation condition”, which states that the energy-momentum tensor associated with non-interacting matter is covariantly divergence-free, in connection with such theorems.

The spectrum argument purports to show that the better-than relation is not transitive, and consequently that orthodox value theory is built on dubious foundations. The argument works by constructing a sequence of increasingly less painful but more drawn-out experiences, such that each experience in the spectrum is worse than the previous one, yet the final experience is better than the experience with which the spectrum began. Hence the betterness relation admits cycles, threatening either the transitivity or the asymmetry of the relation. This paper examines recent attempts to block the spectrum argument, using the idea that it is a mistake to affirm that every experience in the spectrum is worse than its predecessor: an alternative hypothesis is that adjacent experiences may be incommensurable in value, or that, due to vagueness in the underlying concepts, it is indeterminate which is better. While these attempts formally succeed as responses to the spectrum argument, they have additional, as yet unacknowledged costs that are significant. In order to effectively block the argument in its most typical form, in which the first element is radically inferior to the last, it is necessary to suppose that the incommensurability (or indeterminacy) is particularly acute: what might be called radical incommensurability (radical indeterminacy). We explain these costs, and draw some general lessons about the plausibility of the available options for those who wish to save orthodox axiology from the spectrum argument.

The need for expressing temporal constraints in conceptual models is well-known, but it is unclear which representation is preferred and what would be easier to understand by modellers. We assessed five different modes of representing temporal constraints: the formal semantics, Description Logics notation, a coding-style notation, temporal EER diagrams, and (pseudo-)natural language sentences. The same information was presented to 15 participants in an experimental evaluation. Principally, it showed that 1) there was a clear preference for diagrams and natural language versus a dislike for the other representations; 2) diagrams were preferred for simple constraints, but the natural language rendering was preferred for more complex temporal constraints; and 3) a multimodal modelling tool will be needed for the data analysis stage to be effective.

In this paper I discuss the delayed choice quantum eraser experiment by giving a straightforward account in standard quantum mechanics. At first glance, the experiment suggests that measurements on one part of an entangled photon pair (the idler) can be employed to control whether the measurement outcome of the other part of the photon pair (the signal) produces interference fringes at a screen after being sent through a double slit. Significantly, the choice whether there is interference or not can be made long after the signal photon encounters the screen. The results of the experiment have been alleged to invoke some sort of ‘backwards in time influences’. I argue that in the standard collapse interpretation the issue can be eliminated by taking into account the collapse of the overall entangled state due to the signal photon. Likewise, in the de Broglie–Bohm picture the particle’s trajectories can be given a well-defined description at any instant of time during the experiment. Thus, there is no need to resort to any kind of ‘backwards in time influence’. As a matter of fact, the delayed choice quantum eraser experiment turns out to resemble a Bell-type measurement, and so there really is no mystery.

E.S. Pearson (11 Aug 1895 – 12 June 1980)
This is a belated birthday post for E.S. Pearson (11 August 1895 – 12 June 1980). It’s basically a post from 2012 which concerns an issue of interpretation (long-run performance vs probativeness) that’s badly confused these days. …

There’s a new paper on the arXiv that claims to solve a hard problem:
• Norbert Blum, A solution of the P versus NP problem.
Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. …

We owe to Frege in Begriffsschrift our modern practice of taking unrestricted quantification (in one sense) as basic. I mean, he taught us how to rephrase restricted quantifications by using unrestricted quantifiers plus connectives in the now familiar way, so that e.g. …
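Spelled out, the now-familiar rephrasing runs: a restricted universal becomes a conditional under an unrestricted quantifier, and a restricted existential becomes a conjunction:

```latex
\forall x\,{\in}\,F\;Gx \;\rightsquigarrow\; \forall x\,(Fx \rightarrow Gx),
\qquad
\exists x\,{\in}\,F\;Gx \;\rightsquigarrow\; \exists x\,(Fx \wedge Gx).
```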

In this chapter, I will discuss what it takes for a dynamical collapse theory to provide a reasonable description of the actual world. I will start with discussions of what is required, in general, of the ontology of a physical theory, and then apply it to the quantum case. One issue of interest is whether a collapse theory can be a quantum state monist theory, adding nothing to the quantum state and changing only its dynamics. Although this was one of the motivations for advancing such theories, its viability has been questioned, and it has been argued that, in order to provide an account of the world, a collapse theory must supplement the quantum state with additional ontology, making such theories more like hidden-variables theories than would first appear. I will make a case for quantum state monism as an adequate ontology, and, indeed, the only sensible ontology for collapse theories. This will involve taking dynamical variables to possess, not sharp values, as in classical physics, but distributions of values.

I discuss a game-theoretic model in which scientists compete to finish the intermediate stages of some research project. Banerjee et al. (2014) have previously shown that if the credit awarded for intermediate results is proportional to their difficulty, then the strategy profile in which scientists share each intermediate stage as soon as they complete it is a Nash equilibrium. I show that the equilibrium is both unique and strict. Thus rational credit-maximizing scientists have an incentive to share their intermediate results, as long as this is sufficiently rewarded.
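To illustrate what “unique and strict” amounts to, here is a toy symmetric credit game (the payoff numbers are my own invented illustration, not the model of the paper or of Banerjee et al.): a profile is a strict Nash equilibrium when every unilateral deviation strictly lowers the deviator’s payoff.

```python
SHARE, WITHHOLD = 0, 1
# payoff[(row_choice, col_choice)] = (row player's credit, column player's credit)
# Invented numbers: sharing early is rewarded, withholding forfeits some credit.
payoff = {
    (SHARE, SHARE):       (3, 3),
    (SHARE, WITHHOLD):    (2, 1),
    (WITHHOLD, SHARE):    (1, 2),
    (WITHHOLD, WITHHOLD): (1, 1),
}

def strict_nash(profile):
    """True iff every unilateral deviation strictly lowers the deviator's payoff."""
    a, b = profile
    row_ok = all(payoff[(a, b)][0] > payoff[(d, b)][0] for d in (0, 1) if d != a)
    col_ok = all(payoff[(a, b)][1] > payoff[(a, d)][1] for d in (0, 1) if d != b)
    return row_ok and col_ok

equilibria = [p for p in payoff if strict_nash(p)]
print(equilibria)  # [(0, 0)]: mutual sharing is the unique strict equilibrium here
```

In this toy matrix, mutual sharing is the only profile surviving the strict-deviation test, which is the shape of the claim the paper proves for its richer model.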

Persistence judgments are ordinary judgments about whether an object survives a change, or perishes. For instance, if a house fire only superficially damages the kitchen, people judge that the house survived. But if the fire burnt the house to the ground instead, people judge that the house did not survive but was instead destroyed. We are interested in what drives these judgments, in part because objects are so central to our conception of the world, and our persistence judgments get to the very heart of the folk notion of an object.

In models for paraconsistent logics, the semantic values of sentences and their negations are less tightly connected than in classical logic. In “American Plan” logics for negation, truth and falsity are, to some degree, independent. The truth of ∼p is given by the falsity of p, and the falsity of ∼p is given by the truth of p. Since truth and falsity are only loosely connected, p and ∼p can both hold, or both fail to hold. In “Australian Plan” logics for negation, negation is treated rather like a modal operator, where the truth of ∼p in a situation amounts to p failing in certain other situations. Since those situations can be different from this one, p and ∼p might both hold here, or might both fail here.
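A minimal sketch of the “American Plan” idea (the set-of-values encoding is the standard four-valued one; the function names are my own): a sentence’s value is a subset of {T, F}, negation swaps the truth and falsity components, and gluts or gaps make p and ∼p hold together or fail together.

```python
def neg(v):
    """~p is true iff p is false, and false iff p is true."""
    return frozenset({'T' for x in v if x == 'F'} | {'F' for x in v if x == 'T'})

BOTH    = frozenset({'T', 'F'})   # a glut: p is both true and false
NEITHER = frozenset()             # a gap: p is neither true nor false

def holds(v):
    """A sentence holds when truth is among its values."""
    return 'T' in v

# With a glut, p and ~p both hold; with a gap, both fail.
print(holds(BOTH), holds(neg(BOTH)))        # True True
print(holds(NEITHER), holds(neg(NEITHER)))  # False False
```

Since truth and falsity are tracked separately, nothing forces exactly one of p and ∼p to hold, which is the looseness the entry describes.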

Last week a team of 72 scientists released the preprint of an article attempting to address one aspect of the reproducibility crisis, the crisis of conscience in which scientists are increasingly skeptical about the rigor of our current methods of conducting scientific research. …

Suppose that I am throwing a perfectly sharp dart uniformly randomly at a continuous target. The chance that I will hit the center is zero. What if I throw an infinite number of independent darts at the target? …
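A Monte Carlo sketch of the zero-chance claim (my own illustration, assuming a square board with the centre at the origin; all names are mine): the chance of landing within eps of the centre shrinks with eps, and the exact-centre chance is the eps → 0 limit, namely zero.

```python
import random

def hit_freq(eps, trials=100_000, seed=0):
    """Estimate the chance a uniform dart on [-1,1]^2 lands within eps of the centre."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= eps * eps:
            hits += 1
    return hits / trials

for eps in (0.5, 0.05, 0.005):
    print(eps, hit_freq(eps))  # frequencies shrink with eps; the eps -> 0 limit is 0
```

With eps = 0 the test reduces to hitting one exact point, an event of probability zero, which is what makes the infinitely-many-darts variant puzzling.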

The claim of inflationary cosmology to explain certain observable facts, which the FriedmannRoberstonWalker models of ‘BigBang’ cosmology were forced to assume, has already been the subject of significant philosophical analysis. However, the principal empirical claim of inflationary cosmology, that it can predict the scaleinvariant power spectrum of density perturbations, as detected in measurements of the cosmic microwave background radiation, has hitherto been taken at face value by philosophers. The purpose of this paper is to expound the theory of density perturbations used by inflationary cosmology, to assess whether inflation really does predict a scaleinvariant spectrum, and to identify the assumptions necessary for such a derivation. The first section of the paper explains what a scaleinvariant powerspectrum is, and the requirements placed on a cosmological theory of such density perturbations. The second section explains and analyses the concept of the Hubble horizon, and its behaviour within an inflationary spacetime. The third section expounds the inflationary derivation of scaleinvariance, and scrutinises the assumptions within that derivation. The fourth section analyses the explanatory role of ‘horizoncrossing’ within the inflationary scenario.
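For reference, the standard convention (not stated in the abstract itself): the primordial curvature perturbation is characterised by a dimensionless power spectrum, and ‘scale-invariant’ means a spectral index $n_s = 1$, so that the amplitude is the same on every scale $k$:

```latex
\Delta^2_{\mathcal{R}}(k) \;\equiv\; \frac{k^3}{2\pi^2}\,P_{\mathcal{R}}(k) \;\propto\; k^{\,n_s - 1},
\qquad n_s = 1 \ \text{(scale invariance)}.
```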

In the context of superintelligent AI systems, the term “oracle” has two meanings. One refers to modular systems queried for domain-specific tasks. Another usage, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question-answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles. On the other hand, we identify computer algebra systems (CASs) as being primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate computer algebra systems with theorem provers, systems which have largely been developed independently of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified with CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety.

We give a precise semantics for a proposed revised version of the Knowledge Interchange Format. We show that quantification over relations is possible in a first-order logic, but sequence variables take the language beyond first-order.

We report on progress and an unsolved problem in our attempt to obtain a clear rationale for relevance logic via semantic decomposition trees. Suitable decomposition rules, constrained by a natural parity condition, generate a set of directly acceptable formulae that contains all axioms of the well-known system R, is closed under substitution and conjunction, satisfies the letter-sharing condition, but is not closed under detachment. To extend it, a natural recursion is built into the procedure for constructing decomposition trees. The resulting set of acceptable formulae has many attractive features, but it remains an open question whether it continues to satisfy the crucial letter-sharing condition.

J. D. Hamkins and Ø. Linnebo, “The modal logic of set-theoretic potentialism and the potentialist maximality principles.” (manuscript in preparation)
@ARTICLE{HamkinsLinnebo:Modallogicofsettheoreticpotentialism,
  author = {Joel David Hamkins and {\O}ystein Linnebo},
  title = {The modal logic of set-theoretic potentialism and the potentialist maximality principles},
  note = {manuscript in preparation},
  eprint = {1708.01644},
  archivePrefix = {arXiv},
  primaryClass = {math.LO},
  url = {http://jdh.hamkins.org/settheoreticpotentialism},
}
Abstract. …

The standard propositional account of necessary and sufficient conditions in many introductory logic textbooks is based on the material conditional. Some examples include (Barker-Plummer, Barwise, and Etchemendy 2011: 181–182), (Churchill 1986: 391–392), (Forbes 1994: 20–25), (Gabbay 2002: 68), (Haight 1999: 187–189), (Halverson 1984: 285–286), (Hardegree 2011: 129), (Layman 2002: 250–251), (Leblanc and Wisdom 1976: 16–18), (Salmon 1984: 47–48), (P. Smith 2003: 132), (Suppes 1957: 8–10) and (Watson and Arp 2015: 149). In the appendix, pertinent excerpts from some of these resources are provided. In general, the typical exposition goes along the following lines (again, cf. the appendix):
• “A is sufficient for B” is best rendered as “if A, then B”, or symbolically, (A ⊃ B).
• “A is necessary for B” is best rendered as “if not A, then not B”, or symbolically, (¬A ⊃ ¬B). This is equivalent to (B ⊃ A).
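The equivalence invoked in the second bullet, that (¬A ⊃ ¬B) and (B ⊃ A) coincide, can be checked exhaustively over the four valuations; a small sketch (the function names are mine):

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# "A is sufficient for B": A -> B.  "A is necessary for B": ~A -> ~B.
for a, b in product([True, False], repeat=2):
    assert implies(not a, not b) == implies(b, a)  # contraposition: (~A -> ~B) iff (B -> A)

print("(¬A ⊃ ¬B) and (B ⊃ A) agree on all four valuations")
```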

A central proposition of this book is that there are no universal rules for inductive inference. The chapters so far have sought to argue for this proposition and to illustrate it by showing how several popular accounts of inductive inference fail to provide universally applicable rules. Many in an influential segment of the philosophy of science community will judge these efforts to be mistaken and futile. In their view, the problem has been solved, finally and irrevocably.

We propose an investigation of the ways in which speakers’ subjective perspectives are likely to affect the meaning of gradable adjectives like tall or heavy. We present the results of a study showing that people tend to use themselves as a yardstick when ascribing these adjectives to human figures of variable measurements: subjects’ height and weight requirements for applying tall and heavy are found to be positively correlated with their personal measurements. We draw more general lessons regarding the definition of subjectivity and the ways in which a standard of comparison and a significant deviation from that standard are specified.

Recent ideas about epistemic modals and indicative conditionals in formal semantics have significant overlap with ideas in modal logic and dynamic epistemic logic. The purpose of this paper is to show how greater interaction between formal semantics and dynamic epistemic logic in this area can be of mutual benefit. In one direction, we show how concepts and tools from modal logic and dynamic epistemic logic can be used to give a simple, complete axiomatization of Yalcin’s [16] semantic consequence relation for a language with epistemic modals and indicative conditionals. In the other direction, the formal semantics for indicative conditionals due to Kolodny and MacFarlane [9] gives rise to a new dynamic operator that is very natural from the point of view of dynamic epistemic logic, allowing succinct expression of dependence (as in dependence logic) or supervenience statements. We prove decidability for the logic with epistemic modals and Kolodny and MacFarlane’s indicative conditional via a full and faithful computable translation from their logic to the modal logic K45.