We propose that the phenomenon of definite reduplication in Greek involves the use of the definite determiner D as a domain restrictor in the sense of Etxeberria & Giannakidou (2009). The use of D as a domain-restricting function with quantifiers has been well documented for European languages such as Greek, Basque, Bulgarian, and Hungarian, and typically results in a partitive-like interpretation of the QP. We propose a unifying analysis that treats domain restriction and D-reduplication as the same phenomenon: in our analysis, D-reduplication emerges semantically as similar to a partitive structure, a result resonating with earlier claims to this effect by Kolliakou (2004). None of the existing accounts of definites can capture the correlations, established here, between the use of D with quantifiers and its use in reduplication.
Some recent work has challenged two principles thought to govern the logic of the indicative conditional: modus ponens (Kolodny & MacFarlane 2010) and modus tollens (Yalcin 2012). There is a fairly broad consensus in the literature that Kolodny and MacFarlane's challenge can be avoided if the notion of logical consequence is understood aright (Willer 2012; Yalcin 2012; Bledin 2014). The viability of Yalcin's counterexample to modus tollens has meanwhile been challenged on the grounds that it fails to take proper account of context-sensitivity (Stojnić forthcoming). This paper describes a new counterexample to modus ponens and shows that the strategies developed for handling extant challenges to modus ponens and modus tollens fail for it. It diagnoses the apparent source of the counterexample: there are bona fide instances of modus ponens that fail to represent deductively reasonable modes of reasoning.
Interpretive analogies between quantum mechanics and statistical mechanics are drawn out by attending to their common probabilistic structure, and are related to debates about primitive ontology and the measurement problem in quantum mechanics.
Seth Margolis, Daniel Ozer, Sonja Lyubomirsky, and I have designed a new measure of overall life satisfaction. We believe that this measure improves on the most widely used multi-item measure of life satisfaction, Diener et al. …
Assuming that the target of theory-oriented empirical science in general, and of nomic truth approximation in particular, is to characterize the boundary or demarcation between nomic possibilities and nomic impossibilities, I have presented, in my article entitled "Models, postulates, and generalized nomic truth approximation" (Kuipers, 2016), the 'basic' version of generalized nomic truth approximation, starting from 'two-sided' theories. Its main claim is that nomic truth approximation can perfectly well be achieved by combining two prima facie opposing views of theories: (1) the traditional (Popperian) view: theories are (models of) postulates that exclude certain possibilities from being realizable, enabling explanation and prediction; and (2) the model view: theories are sets of models that claim to (approximately) represent certain realizable possibilities. Nomic truth approximation, i.e. increasing truth-content and decreasing falsity-content, in this way becomes a matter of revising theories by revising their models and/or their postulates in the face of increasing evidence.
Many philosophers hold that a rational person can have imprecise credences. A famous argument due to Adam Elga, however, purports to show that rationality requires that credences have precise values. I show that Elga’s argument can be evaded if we understand imprecise credences to be a case of vagueness.
Science posits a lot of probabilistic laws, including probabilistic laws of evolutionary biology, probabilistic laws of thermodynamics, and probabilistic laws of quantum mechanics, among others. The received view (among philosophers) about the probabilistic laws found in science is that they are laws about chances. "Chance" is of course a term of art; it refers to single-case objective probabilities that obey the Principal Principle (or something in the neighborhood of the Principal Principle). This view leaves us with two sets of philosophical problems concerning probabilistic laws: problems about laws — the metaphysics of lawhood and the epistemology of laws — and problems about chances — including the metaphysics of chance and its epistemology. There's some hope that we might be able to elegantly solve both sets of problems at once — for example, the way that David Lewis aims to in his best system analysis of laws and chances together. But there are two sets of problems here, and there is no guarantee that what solves one will solve the other.
Inquiry into the meaning of logical terms in natural language (‘and’, ‘or’, ‘not’, ‘if’) has generally proceeded along two dimensions. On the one hand, semantic theories aim to predict native speaker intuitions about natural language sentences involving those logical terms. On the other hand, logical theories explore the formal properties of the translations of those terms into formal languages. Sometimes these two lines of inquiry appear to be in tension: for instance, our best logical investigation into conditional connectives may show that there is no conditional operator that has all the properties native speaker intuitions suggest ‘if’ has.
Mayo (1996) begins the development of a general account of the epistemology of science, the “error-statistical philosophy of science” (ESPOS). The core commitments of ESPOS are that all scientific evidence takes the form of severe tests, and that severity requires low error probabilities. I examine the question of whether the basic commitments of ESPOS are compatible with a satisfactory account of the experimental testing of high-level theories. I argue that Mayo’s arguments for the affirmative are unconvincing: Not only are severe tests of high-level theories impossible, but the strategies Mayo proposes for learning about high-level theories via severe tests are not promising. I then propose a way of extending ESPOS to make possible a satisfactory treatment of the testing of theories.
Good’s Theorem is the apparent platitude that it is always rational to ‘look before you leap’: to gather (reliable) information before making a decision when doing so is free. We argue that Good’s Theorem is not platitudinous and may be false. And we argue that the correct advice is rather to ‘make your act depend on the answer to a question’. Looking before you leap is rational when, but only when, it is a way to do this.
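The structure behind Good's result can be made concrete with a toy calculation (all numbers below are hypothetical illustrations, not from the paper): when an informative answer is free, the expected utility of deciding after observing the answer is never lower than that of deciding now.

```python
# Toy decision problem: two states, two acts, and a free binary test.
# All probabilities and payoffs are made-up illustrative numbers.
states = {"s1": 0.6, "s2": 0.4}                      # prior over states
utility = {("act_a", "s1"): 10, ("act_a", "s2"): 0,
           ("act_b", "s1"): 4,  ("act_b", "s2"): 8}
lik = {"s1": 0.8, "s2": 0.3}                         # P(test positive | state)

def best_eu(probs):
    """Expected utility of the best act under the given state probabilities."""
    return max(sum(probs[s] * utility[(a, s)] for s in probs)
               for a in ("act_a", "act_b"))

# Deciding now, without the test.
eu_now = best_eu(states)

# Deciding after observing the (free) test result: condition, then choose.
p_pos = sum(states[s] * lik[s] for s in states)
post_pos = {s: states[s] * lik[s] / p_pos for s in states}
post_neg = {s: states[s] * (1 - lik[s]) / (1 - p_pos) for s in states}
eu_after = p_pos * best_eu(post_pos) + (1 - p_pos) * best_eu(post_neg)

# Good's Theorem: for free, reliable information, eu_after >= eu_now.
```

In this toy case making the act depend on the answer raises the expected utility (from 6 to 7.52), which is the "look before you leap" pattern the paper scrutinizes.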
The quest for quantum gravity has undergone a dramatic shift in focus and direction in recent years. This shift followed, and at the same time inspired and directly produced, many important results that further support the new perspective. The purpose of this contribution is to outline this new perspective and to clarify the conceptual framework in which quantum gravity should then be understood. We will emphasize how it differs from the traditional view and the new issues it gives rise to, and we will frame within it some recent research lines in quantum gravity. Both the traditional and new perspectives on quantum gravity are nicely captured in terms of a ‘diagram in the space of theoretical frameworks’. The traditional view can be outlined in correspondence with the ‘Bronstein cube’ of physical theories. The more modern perspective, we argue, amounts both to a deepening of this traditional view and to a broader framework, which we will outline using (somewhat light-heartedly) a ‘Bronstein hypercube’ of physical theories.
The philosophy of language since Frege has emphasized propositions and declarative sentences, but it is clear that questions and interrogative sentences are just as important. Scientific investigation and explanation proceed in part through the posing and answering of questions, and human-computer interaction is often structured in terms of queries and answers. After going over some preliminaries, we will focus on three lines of work on questions: one located at the intersection of philosophy of language and formal semantics, focusing on the semantics of what Belnap and Steel (1976) call elementary questions; a second located at the intersection of philosophy of language and philosophy of science, focusing on why-questions and the notion of explanation; and a third located at the intersection of philosophy of language and epistemology, focusing on embedded or indirect questions.
Homotopy type theory and its model theory provide a novel formal semantic framework for representing scientific theories. This framework supports a constructive view of theories, according to which a theory is essentially characterised by its methods. The constructive view of theories was defended earlier by Ernest Nagel and a number of other philosophers of the past, but the logical means then available did not allow them to build formal representational frameworks that implement this view.
Spontaneous collapse theories provide a promising solution to the measurement problem. But they also introduce a number of problems of their own. First, the primary explanatory entity of a collapse theory— the wave function— inhabits a high-dimensional space, rather than the three-dimensional space of experience. Second, the continuity of the wave function introduces a new and potentially problematic form of vagueness when used to describe discrete physical systems such as particles or marbles. Third, the collapse of the wave function is hard to reconcile with special relativity.
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof.
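The uncertainty-reduction model mentioned above can be sketched in a few lines (a minimal illustration with made-up numbers, not data from any study): the value of a test is measured by the expected drop in Shannon entropy over the candidate diseases.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical prior over three candidate diseases.
prior = {"flu": 0.5, "cold": 0.3, "strep": 0.2}

# Hypothetical test characteristics: P(test positive | disease).
likelihood_pos = {"flu": 0.9, "cold": 0.2, "strep": 0.7}

def expected_posterior_entropy(prior, lik_pos):
    """Entropy remaining, on average, after observing the test result."""
    p_pos = sum(prior[d] * lik_pos[d] for d in prior)
    p_neg = 1 - p_pos
    post_pos = [prior[d] * lik_pos[d] / p_pos for d in prior]
    post_neg = [prior[d] * (1 - lik_pos[d]) / p_neg for d in prior]
    return p_pos * entropy(post_pos) + p_neg * entropy(post_neg)

# Expected information gain of running the test (in bits).
gain = entropy(prior.values()) - expected_posterior_entropy(prior, likelihood_pos)
```

On this model, the test worth selecting is the one with the largest expected gain; comparing several candidate tests is just a matter of evaluating `gain` for each likelihood table.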
Kenny Courser and I have been working hard on this paper for months:
• John Baez and Kenny Courser, Coarse-graining open Markov processes. It may be almost done. So, it would be great if people here could take a look and comment on it! …
We are led, by these considerations, to a picture of the material world that has much more in common with the abstract realms of sets or of propositions than with the realms of concreta envisaged by the mereologist or by his “three-dimensional” opponent.
I just read something cool:
• Joel David Hamkins, Nonstandard models of arithmetic arise in the complex numbers, 3 March 2018. Let me try to explain it in a simplified way. I think all cool math should be known more widely than it is. …
The impetus theory of motion states that to be in motion is to have a non-zero velocity. The at-at theory of motion states that to be in motion is to be at different places at different times, which in classical physics is naturally understood as the reduction of velocities to position developments. I first defend the at-at theory against the criticism raised by Arntzenius that it renders determinism impossible. I then develop a novel impetus theory of motion that reduces positions to velocity developments. As this impetus theory of motion is by construction a mirror image of the at-at theory of motion, I claim that the two theories of motion are in fact epistemically on par—despite the unfamiliar metaphysical picture of the world furnished by the impetus version.
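The two reductions contrasted above can be written out schematically (standard kinematics notation; the initial condition is the extra posit the impetus picture needs):

```latex
% At-at theory: velocity is reduced to the development of position.
v(t) \;:=\; \frac{dx}{dt}(t)

% Impetus theory (the mirror image): position is reduced to the
% development of velocity, given an initial position x(t_0).
x(t) \;:=\; x(t_0) + \int_{t_0}^{t} v(s)\, ds
```

The symmetry between differentiation and integration here is what underwrites the claim that the two theories are epistemically on a par.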
Take a mathematician of Frege’s generation, accustomed to writing the likes of
(2) If , then or ,
— and fancier things, of course! Whatever unclear thoughts about ‘variables’ people may or may not have had once upon a time, they had surely been dispelled well before the 1870s, if not by Bolzano’s 1817 Rein analytischer Beweis (though perhaps that was not widely enough read? …
We introduce the notion of a normative equilibrium as a method that brings harmony to "general equilibrium"-like environments in which individuals make preference-maximizing choices but not every profile of choices is feasible. In an equilibrium, norms stipulate what is permissible and what is forbidden. These uniform norms play a role analogous to that of price systems in competitive equilibrium and also feature some element of "fairness", since all individuals face the same choice set. The solution concept is a maximally permissive set of alternatives that is consistent with the existence of a feasible profile of optimal choices. Properties of the solution concept are analysed, and the concept is applied to a variety of economic settings.
According to spacetime state realism (SSR), the fundamental ontology of a quantum mechanical world consists of a state-valued field evolving in 4-dimensional spacetime. One chief advantage it claims over rival wavefunction realist views is its natural compatibility with relativistic quantum field theory (QFT). I argue that the original density operator formulation of SSR cannot be extended to QFTs where the local observables form type III von Neumann algebras. Instead, I propose a new formulation of SSR in terms of a presheaf of local state spaces dual to the net of local observables studied by algebraic QFT.
The best justification of time-discounting is roughly that it is rational to care less about your more distant future because there is less of you around to have it. I argue that the standard version of this argument, which treats both psychological continuity and psychological connectedness as reasons to care about your future, can only rationalize an irrational—because exploitable—form of future discounting.
When theorizing about the a priori, philosophers typically deploy a sentential operator: ‘it is a priori that’. This operator can be combined with metaphysical modal operators, and in particular with ‘it is necessary that’ and ‘actually’ (in the standard, rigidifying sense), in a single argument or a single sentence. Arguments and theses that involve such combinations have played a starring role in post-Kripkean metaphysics and epistemology. The phenomena of the contingent a priori and the necessary a posteriori have been organizing themes in post-Kripkean discussions, and these phenomena cannot be easily discussed without using sentences and arguments that involve the interaction of the apriority, necessity, and actuality operators. However, there has been surprisingly little discussion of the logic of the interaction of these operators. In this paper we shall attempt to make some progress on that topic.
Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as initially they seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett’s work on backward causation and Lewis’s on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis’s own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
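The divergence between the two theories, and why it vanishes when the causal facts are non-standard, can be put as a back-of-envelope calculation (the payoffs are the usual illustrative Newcomb numbers, and the predictor accuracy is a hypothetical figure, not from the paper):

```python
# Standard Newcomb setup: the opaque box holds $1,000,000 iff the Predictor
# foresaw one-boxing; the transparent box always holds $1,000.
acc = 0.99  # hypothetical predictor accuracy

# EDT: treat your own choice as evidence about what was predicted.
eu_one_edt = acc * 1_000_000                       # likely predicted, get $1m
eu_two_edt = acc * 1_000 + (1 - acc) * 1_001_000   # likely predicted, get $1k

# CDT with the STANDARD causal facts: the contents are causally fixed, so
# for any credence p that the $1,000,000 is present, two-boxing dominates.
p = 0.5
eu_one_cdt = p * 1_000_000
eu_two_cdt = p * 1_000_000 + 1_000  # always $1,000 better, whatever p is

# If instead declining the $1,000 CAUSES the $1,000,000 to have been placed
# (the non-standard causal facts discussed above), CDT's causal expected
# utilities collapse into the EDT values, and both theories say: one-box.
```

The arithmetic shows the familiar split (EDT favours one-boxing, CDT two-boxing) depends entirely on which causal hypothesis is held fixed, which is the hinge of the dilemma the paper develops.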
This continues my previous post: “Can’t take the fiducial out of Fisher…” in recognition of Fisher’s birthday, February 17. I supply a few more intriguing articles you may find enlightening to read and/or reread on a Saturday night.
Move up 20 years to the famous 1955/56 exchange between Fisher and Neyman. …
Shagrir () and Sprevak () explore the apparent necessity of representation for the individuation of digits (and processors) in computational systems. I will first offer a response to Sprevak’s argument that does not mention Shagrir’s original formulation, which was more complex. I then extend my initial response to cover Shagrir’s argument, thus demonstrating that it is possible to individuate digits in non-representational computing mechanisms. I also consider the implications that the non-representational individuation of digits would have for the broader theory of computing mechanisms.
This paper gives a definition of self-reference on the basis of the dependence relation given by Leitgeb (2005) and the dependence digraph by Beringer & Schindler (2015). Unlike the usual discussion of the self-reference of paradoxes, which centers around Yablo’s paradox and its variants, I focus on paradoxes of finitary characteristic, which are again given by use of Leitgeb’s dependence relation. They are called ‘locally finite paradoxes’: any sentence in these paradoxes depends on only finitely many sentences. I prove that all locally finite paradoxes are self-referential in the sense that there is a directed cycle in their dependence digraphs. This paper also studies the ‘circularity dependence’ of paradoxes, which was introduced by Hsiung (2014). I prove that the locally finite paradoxes have circularity dependence in the sense that they are paradoxical only in a digraph containing a proper cycle. The proofs of the two results are based directly on König’s infinity lemma. In contrast, this paper also shows that Yablo’s paradox and its ∀∃-unwinding variant are non-self-referential, and that neither McGee’s paradox nor the ω-cycle liar has circularity dependence.
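The self-reference criterion used here (a directed cycle in the dependence digraph) is mechanically checkable for finite digraphs. A minimal sketch, using hypothetical toy graphs (and noting that Yablo-style infinite digraphs are of course beyond any such finite check):

```python
# Dependence digraph for a toy two-sentence liar cycle:
# sentence A depends on B ("B is true"), and B depends on A ("A is false").
deps = {"A": ["B"], "B": ["A"]}

def has_directed_cycle(graph):
    """Detect a directed cycle via depth-first search with three colors."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color.get(w, WHITE) == GRAY:
                return True               # back edge: a directed cycle
            if color.get(w, WHITE) == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)
```

On the toy liar graph `has_directed_cycle(deps)` is `True`, matching the paper's claim that such locally finite paradoxes are self-referential; an acyclic dependence chain comes out `False`.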
When scientists seek further confirmation of their results, they often attempt to duplicate the results using diverse means. To the extent that they are successful in doing so, their results are said to be ‘robust’. This article investigates the logic of such ‘robustness analysis’ (RA). The most important and challenging question an account of RA can answer is what sense of evidential diversity is involved in RAs. I argue that prevailing formal explications of such diversity are unsatisfactory. I propose a unified, explanatory account of diversity in RAs. The resulting account is, I argue, truer to actual cases of RA in science; moreover, this account affords us a helpful new foothold on the logic undergirding RAs.
Ruetsche () claims that an abstract C*-algebra of observables will not contain all of the physically significant observables for a quantum system with infinitely many degrees of freedom. This would signal that in addition to the abstract algebra, one must use Hilbert space representations for some purposes. I argue to the contrary that there is a way to recover all of the physically significant observables by purely algebraic methods.