In March, I’ll be talking at Spencer Breiner‘s workshop on Applied Category Theory at the National Institute of Standards and Technology. I’ll be giving a joint talk with John Foley about our work using operads to design networks. …
The Gelukpa (or Geluk) tradition of Tibetan Buddhist philosophy is
inspired by the works of Tsongkhapa (1357–1419), who set out a
distinctly nominalist Buddhist tradition that differs sharply from
other forms of Buddhist thought not only in Tibet, but elsewhere in
the Buddhist world. The negative dialectics of the Middle Way
(madhyamaka) is the centerpiece of the Geluk intellectual
tradition and is the philosophy that is commonly held in Tibet to
represent the highest view. The Middle Way, a philosophy systematized
in the second century by Nāgārjuna, seeks to chart a
“middle way” between the extremes of essentialism and
nihilism with the notion of two truths: the ultimate truth of
emptiness and the relative truth of dependent existence.
Joseph Butler is best known for his criticisms of the hedonic and
egoistic “selfish” theories associated with Hobbes and
Bernard Mandeville and for his positive arguments that self-love and
conscience are not at odds if properly understood (and indeed promote
and sanction the same actions). In addition to his importance as a
moral philosopher, Butler was also an influential Anglican theologian. Unsurprisingly, his theology and philosophy were connected — his
main writings in moral philosophy were published sermons, a work of
natural theology, and a brief dissertation attached to that work. Although most of Butler’s moral arguments make rich use of passages
from scripture and familiar Christian stories and concepts, they make
little reference to — and depend little on the reader having
— any particular religious commitments.
Respect has great importance in everyday life. As children we are
taught (one hopes) to respect our parents, teachers, and elders,
school rules and traffic laws, family and cultural traditions, other
people's feelings and rights, our country's flag and leaders, the
truth and people's differing opinions. And we come to value respect
for such things; when we're older, we may shake our heads (or fists)
at people who seem not to have learned to respect them. We develop
great respect for people we consider exemplary and lose respect for
those we discover to be clay-footed, and so we may try to respect only
those who are truly worthy of our respect.
It is a striking fact from reverse mathematics that almost all theorems of countable and countably representable mathematics are equivalent to just five subsystems of second order arithmetic. The standard view is that the significance of these equivalences lies in the set existence principles that are necessary and sufficient to prove those theorems. In this article I analyse the role of set existence principles in reverse mathematics, and argue that they are best understood as closure conditions on the powerset of the natural numbers.
This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved including the Formal Consistency of Mathematics. Also, Strong Types are extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes.
Davidson’s well-known language skepticism—the claim that there is no such thing as a language—has recognizably Gricean underpinnings, some of which also underlie his continuity skepticism—the claim that there can be no philosophically illuminating account of the emergence of language and thought. My first aim in this paper is to highlight aspects of the complicated relationship between central Davidsonian and Gricean ideas concerning language. After a brief review of Davidson’s two skeptical claims and their Gricean underpinnings, I provide my own take on how Davidson’s continuity skepticism can be resisted consistently with his rejection of the Gricean priority claim, yet without giving up some of Grice’s own insights regarding the origins of meaning.
As part of the week of recognizing R.A. Fisher (February 17, 1890 – July 29, 1962), I reblog a guest post by Stephen Senn from 2012/2017. The comments from 2017 lead to a troubling issue that I will bring up in the comments today. …
17 February 1890–29 July 1962
Today is R.A. Fisher’s birthday. I’ll post some Fisherian items this week in honor of it. This paper comes just before the conflicts with Neyman and Pearson erupted. Fisher links his tests and sufficiency to the Neyman and Pearson lemma in terms of power. …
There is a vast literature that seeks to uncover features underlying moral judgment by eliciting reactions to hypothetical scenarios such as trolley problems. These thought experiments assume that participants accept the outcomes stipulated in the scenarios. Across seven studies (N = 968), we demonstrate that intuition overrides stipulated outcomes even when participants are explicitly told that an action will result in a particular outcome. Participants instead substitute their own estimates of the probability of outcomes for stipulated outcomes, and these probability estimates in turn influence moral judgments. Our findings demonstrate that intuitive likelihoods are one critical factor in moral judgment, one that is not suspended even in moral dilemmas that explicitly stipulate outcomes. Features thought to underlie moral reasoning, such as intention, may operate, in part, by affecting the intuitive likelihood of outcomes, and, problematically, moral differences between scenarios may be confounded with non-moral intuitive probabilities.
The distribution of matter in our universe is strikingly time asymmetric. Most famously, the Second Law of Thermodynamics says that entropy tends to increase toward the future but not toward the past. But what explains this time-asymmetric distribution of matter? In this paper, I explore the idea that time itself has a direction by drawing from recent work on grounding and metaphysical fundamentality. I will argue that positing such a direction of time, in addition to time-asymmetric boundary conditions (such as the so-called “past hypothesis”), enables a better explanation of the thermodynamic asymmetry than is available otherwise.
Traditionally philosophical discussions on moral responsibility have
focused on the human components in moral action. Accounts of how to
ascribe moral responsibility usually describe human agents performing
actions that have well-defined, direct consequences. In today’s
increasingly technological society, however, human activity cannot be
properly understood without making reference to technological
artifacts, which complicates the ascription of moral responsibility
(Jonas 1984; Waelbers 2009).
As we interact with and through these artifacts, they affect the
decisions that we make and how we make them (Latour 1992).
Adverbialist theories of thought such as those advanced by Hare (1969) and Sellars (1969) promise an ontologically sleek understanding of a variety of intentional states, but such theories have been largely abandoned due to the ‘many-property problem’. In an attempt to revitalize this otherwise attractive theory, in a series of papers as well as his recent book, Uriah Kriegel has offered a novel reply to the ‘many-property problem’, and on its basis he argues that ‘adverbialism about intentionality is alive and well’. If he is right, Kriegel will have shown that the logical landscape has long been unnecessarily constrained. His key idea is that the many-property problem can be overcome by appreciating that mental states stand in the determinable-determinate relation to one another. The present paper shows that this relation can’t save adverbialism because it would require thinkers to think more thoughts than they need be thinking.
It is often said that ‘what it is like’-knowledge cannot be acquired by consulting testimony or reading books [Lewis 1998; Paul 2014; 2015a]. However, people also routinely consult books like What It Is Like to Go to War [Marlantes 2014], and countless ‘what it is like’ articles and YouTube videos, in the apparent hope of gaining knowledge about what it is like to have experiences they have not had themselves. This article examines this puzzle and tries to solve it by appealing to recent work on knowing-wh ascriptions. In closing I indicate the wider significance of these ideas by showing how they can help us to evaluate prominent arguments by Paul [2014; 2015a] concerning transformative experiences.
Autonomous agents are self-governing agents. But what is a
self-governing agent? Governing oneself is no guarantee that one will
have a greater range of options in the future, or the sort of
opportunities one most wants to have. Since, moreover, a person can
govern herself without being able to appreciate the difference between
right and wrong, it seems that an autonomous agent can do something
wrong without being to blame for her action. What, then, are the
necessary and sufficient features of this self-relation? Philosophers
have offered a wide range of competing answers to this question.
There are two standard responses to the discrepancy between observed galactic rotation curves and the theoretical curves calculated on the basis of luminous matter: postulate dark matter, or modify gravity. Most physicists accept the former as part of the concordance model of cosmology; the latter encompasses a family of proposals, of which MOND is perhaps the best-known example. Don Saari, however, claims to have found a third alternative: to explain this discrepancy as a result of approximation methods which are unfaithful to the underlying Newtonian dynamics. If he is correct, eliminating the problematic approximations should allow physicists and astronomers to preserve the validity of Newtonian dynamics in galactic systems without invoking dark matter.
We defend the many-worlds interpretation of quantum mechanics (MWI) against the objection that it cannot explain why measurement outcomes are predicted by the Born probability rule. We understand quantum probabilities in terms of an observer’s self-location probabilities. We formulate a probability postulate for the MWI: the probability of self-location in a world with a given set of outcomes is the absolute square of that world’s amplitude. We provide a proof of this postulate, which assumes the quantum formalism and two principles concerning symmetry and locality. We also show how a structurally similar proof of the Born rule is available for collapse theories. We conclude by comparing our account to the recent account offered by Sebens and Carroll.
The internet has made it easier than ever to speak to others. It has empowered individuals to publish their opinions without first convincing a media company of their commercial value; to find and share others' views without the fuss of photocopying and mailing newspaper clippings; and to respond to those views without the limitations of a newspaper letter page. …
I find the following line of thought to have a lot of intuitive pull:
Some mental states have great non-instrumental ethical significance. No physical brain states have that kind of non-instrumental ethical significance. …
Weak supplementation says that if x is a proper part of y, then y has a proper part that doesn’t overlap x. Suppose that we are impressed by standard counterexamples to weak supplementation like the following. …
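In the standard notation of formal mereology (with P for parthood, PP for proper parthood, and O for overlap), the principle just stated can be written as:

```latex
% Weak supplementation: every proper part leaves a disjoint remainder
\forall x\, \forall y\, \bigl( PP(x,y) \rightarrow \exists z\, ( PP(z,y) \wedge \neg O(z,x) ) \bigr)
```

where O(z,x) abbreviates $\exists w\,(P(w,z) \wedge P(w,x))$, i.e., z and x share a common part.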
In ‘Freedom and Resentment’ P. F. Strawson argues that reactive attitudes like resentment and indignation cannot be eliminated altogether, because doing so would involve exiting interpersonal relationships altogether. I describe an alternative to resentment: a form of moral sadness about wrongdoing that, I argue, preserves our participation in interpersonal relationships. Substituting this moral sadness for resentment and indignation would amount to a deep and far-reaching change in the way we relate to each other – while keeping in place the interpersonal relationships, which, Strawson rightly believes, cannot be eliminated.
Use of ‘representation’ pervades the literature in cognitive science. But do representations actually play a role in cognitive-scientific explanation, or is such talk merely colorful commentary? Are, for instance, patterns of cortical activity in motion-sensitive visual area MT or strings of symbols in a language-processing parser genuine representations? Do they have content? And if they do, can a naturalist assign such contents in a well-motivated and satisfying way?
I’ve just realized that one can motivate belief in bare particulars as follows:
Constituent ontology of attribution: A thing has a quality if and only if that quality is a part of it. Universalism: Every plurality has a fusion. …
Louis Pierre Althusser (1918–1990) was one of the most
influential Marxist philosophers of the 20th Century. As
they seemed to offer a renewal of Marxist thought as well as to render
Marxism philosophically respectable, the claims he advanced in the
1960s about Marxist philosophy were discussed and debated
worldwide. Due to apparent reversals in his theoretical positions, to
the ill-fated facts of his life, and to the historical fortunes of
Marxism in the late twentieth century, this intense interest in
Althusser’s reading of Marx did not survive the 1970s. Despite the
comparative indifference shown to his work as a whole after these
events, the theory of ideology Althusser developed within it has been
broadly deployed in the social sciences and humanities and has
provided a foundation for much “post-Marxist” philosophy.
In Japan, Confucianism stands, along with Buddhism, as a major religio-philosophical teaching introduced from the larger Asian cultural arena at the dawn of civilization in Japanese history, roughly the mid-sixth century. Unlike Buddhism, which ultimately hailed from India, Confucianism was first and foremost a distinctly Chinese teaching. It spread, however, from Han dynasty China, into Korea, and then later entered Japan via, for the most part, the Korean peninsula. In significant respects, then, Confucianism is the intellectual force defining much of the East Asian identity of Japan, especially in relation to philosophical thought and practice.
Giacomo (Jacopo) Zabarella (b. 1533 in Padua, d. 1589 in Padua) is
considered the prime representative of Renaissance Italian
Aristotelianism. Known most of all for his writings on logic and
methodology, Zabarella was an alumnus of the University of Padua,
where he received his Ph.D. in philosophy. Throughout his teaching
career at his native university, he also taught philosophy of nature
and science of the soul (De anima). Among his main works are
the collected logical works Opera logica (1578) and writings
on natural philosophy, De rebus naturalibus (1590). Zabarella
was an orthodox Aristotelian seeking to defend the scientific status
of theoretical natural philosophy against the pressures emanating from
the practical disciplines, i.e., the art of medicine and anatomy.
Modern medicine is often said to have originated with 19th century germ theory, which attributed diseases to particular bacterial contagions. The success of this theory is often associated with an underlying principle referred to as the “doctrine of specific etiology,” which refers to the theory’s specificity at the level of disease causation or etiology. Despite the perceived importance of this doctrine, the literature lacks a clear account of the types of specificity it involves and why exactly they matter. This paper argues that the 19th century germ theory model involves two types of specificity at the level of etiology. One type receives significant attention in the literature, but its influence on modern medicine has been misunderstood. A second type is present in this model, but it has been overlooked in the extant literature. My analysis clarifies how these types of specificity led to a novel conception of etiology, which continues to figure in medicine today.
The debate about the nature of knowledge-how is standardly thought to be divided between Intellectualist views, which take knowledge-how to be a kind of propositional knowledge, and Anti-Intellectualist views, which take knowledge-how to be a kind of ability. In this paper, I explore a compromise position—the Interrogative Capacity view—which claims that knowing how to do something is a certain kind of ability to generate answers to the question of how to do it. This view combines the Intellectualist thesis that knowledge-how is a relation to a set of propositions with the Anti-Intellectualist thesis that knowledge-how is a kind of ability. I argue that this view combines the positive features of both Intellectualism and Anti-Intellectualism.
A number of people have been puzzled by the somewhat obscure arguments in my “Divine Creative Freedom” against a theistic modal realism on which God creates infinitely many worlds, and a proposition is possible if and only if it is true at one of them. …