-
The photon is typically regarded as a unitary object that is both particle-discrete and wave-continuous. This is a paradoxical position, and we live with it by making dualism a fundamental feature of radiation. It is argued here that the photon is not unitary; rather, it has two identities, one supporting discrete behavior and the other supporting continuous (wave) behavior. There is the photon's kinetic energy, which is always discrete/localized on arrival; it never splits (on half-silvered mirrors) or diffracts (in pinholes or slits). Then there is the photon's probability wavefront, which is continuous and diffractable. Acknowledging that the photon has two identities explains the photon's dual nature. And wave-particle duality is central to quantum mechanics. Understanding it leads to new insights into the photon's constant velocity and its entanglement with another photon.
-
The idea of using lattice methods to provide a mathematically well-defined formulation of realistic effective quantum field theories (QFTs) and clarify their physical content has gained traction in the last decades. In this paper, I argue that this strategy faces a two-sided obstacle: realistic lattice QFTs are (i) too different from their effective continuum counterparts even at low energies to serve as their foundational proxies and (ii) far from reproducing all of their empirical and explanatory successes to replace them altogether. I briefly conclude with some lessons for the foundations of QFT.
-
Thought experiments (TEs) are indispensable conceptual tools in scientific research, particularly in the study of quantum gravity. Many scholars argue that the epistemic significance of TEs hinges on the proper and ineliminable use of imagination. However, there is disagreement regarding the specific nature of the imagination involved. A valuable perspective on this debate is provided by a TE proposed by Matvei Bronstein in 1936 to support a quantum theory of gravity. His contribution serves as a notable example of destructive TE, aiming to highlight the internal inconsistency within a unified theory of both quantum mechanics and general relativity. In this paper, I reconstruct Bronstein’s TE in the context of recent discussions on the relationship between TEs and imagination. I argue that this case study challenges existing epistemological frameworks for understanding TEs. I contend that Bronstein’s TE introduces a new form of imagination, termed operational imagination, as indispensable for reaching its intended conclusion. I conclude that operational imagination can be integrated into simulative model-based accounts of TEs.
-
Process jargon is widespread in the physical sciences. Beginning with the work of Wesley Salmon, several accounts in philosophy of science have attempted to provide a definition of “process” compatible with scientists’ understanding of causation and explanation. The proposed characterisation links processes to the properties of the spacetime they inhabit as regards continuity and genuine causality. Recent developments in theories of quantum gravity challenge the validity of process ontologies at the fundamental scale. In particular, this paper examines how arguments based on minimal length in the literature question the traditional definition of process. Process realism does not favour the processualist against these arguments. I conclude that certain theories of quantum gravity prevent a processual representation of the intended phenomena at the fundamental scale because they predict a violation of either the spatiotemporal specification or the causality conditions. In the end, the processualist faces a dilemma: either weaken the accepted definition of process without falling into substance ontologies, or hope that problematic theories of quantum gravity will be disconfirmed.
-
Machine learning is rapidly transforming how society and humans are quantified. Shared amongst some machine learning applications in the social and human sciences is the tendency to conflate concepts with their operationalization through particular tests or measurements. Existing scholarship reduces these equations of concept and operationalization to disciplinary naivety or negligence. This paper takes a close look at equations of concept and operationalization in machine learning predictions of poverty metrics. It develops two arguments. First, I demonstrate that conflations of concept and operationalization in machine learning poverty prediction cannot be reduced to naivety or negligence but can serve a strategic function. Second, I propose to understand this function in the context of philosophical and historical research on operationalism in the social sciences.
-
The term ‘spontaneous’ appears in various contexts in modern physics, but it also has a long history in natural philosophy. Its Greek analogue, to automaton, is studied by Aristotle, and the Latin phrase sponte sua is used extensively by Lucretius. Peirce also introduces spontaneity in the context of his tychism. In this thesis we give a historical overview of these uses of spontaneity and compare them to spontaneity in thermodynamics and quantum mechanics. We examine the relation to quantum measurement. We argue that in the Copenhagen interpretation, no quantum event can be said to be truly spontaneous, but that true spontaneity does exist in spontaneous collapse theories. Finally, we investigate the relation of spontaneity to randomness and indeterminism.
-
When thinking about online speech, it’s tempting to start with questions like: What’s new here? Do online speech environments enable new types of speech acts, new semantic phenomena, new expressive effects? In other words, how has the shift to online speech fundamentally changed how we use language to communicate, coordinate, obfuscate, rouse, empower, disempower, insult, etc.? What hidden truths might online speech reveal about the nature of meaning and communication more broadly?
-
In the last fifth of their interview, Adelstein and Huemer discuss my views. I now respond point by point. Adelstein:
And it just almost feels like there's something different going on when Bryan Caplan does moral reasoning than when I do. …
-
In Part 4 we saw how the classical Kepler problem is connected to a particle moving on the 3-sphere, and how this illuminates the secret symmetry of the Kepler problem. There are various ways to quantize the Kepler problem and obtain a description of the hydrogen atom’s bound states as wavefunctions on the 3-sphere. …
-
Joseph Heath presents his market failures approach to business ethics as a happy medium between cynicism and the idealism of traditional moral theories such as Kantian ethics, which Heath believes to be incompatible with important forms of competition. The market failures approach defends some real ethical limits in business, beyond following the law, but it condones certain deviations from the norms of everyday morality in the interest of economic efficiency. On this view, a certain level of sleaziness in business is permissible and inevitable, even if it is regrettable. This article argues that Kantian ethics provides a better account of the ethics of competition than the market failures approach does. Kantian ethics is in fact compatible with competition, both on the market and in the workplace. On some key issues, notably including the issue of truthfulness and disclosure, Kantian ethics permits competitive strategies that the market failures theory forbids. Moreover, when Kantian ethics deems the reasoning behind a competitive strategy morally acceptable, it endorses the strategy without any ethical reservations. There is no reason to regard justified business practices as regrettable or sleazy.
-
Now that you’ve watched and/or read the Matthew Adelstein-Mike Huemer conversation on the ethics of insect suffering, I hope you’re ready to hear my reaction. I’m going to post this in two parts. In part 1, I dissect Adelstein and Huemer’s exchange with each other. …
-
A decade ago, Effective Altruism got an early taste of bad PR when someone at an EA Global conference was widely reported as enthusing that EA was “the last social movement the world would ever need,” or words to that effect. …
-
The history of logic is a topical direction of research within contemporary logical knowledge. Such investigations, first, contribute to building a general picture of the evolution of logic, to understanding the changes its subject matter has undergone as a science and as an academic discipline, as well as the changes in the paradigmatic principles of its historical development, in the foundational rules for constructing logical theories, and in the toolkit of the latter. Second, historico-logical research makes it possible to reveal how logical conceptions have influenced other scientific disciplines, above all philosophy and mathematics. Third, historico-logical analysis allows one to consider the logical position of a particular author in a broad historico-philosophical context and to show how philosophical ideas influenced the development of logical knowledge. Fourth, research in the history of logic helps to situate it in a broad historical and cultural context and to ascertain the mutual influence of various logical views with particular cultural traditions and the peculiarities of historical epochs.
-
We present a logic which deals with connexive exclusion. Exclusion (also called “co-implication”) is considered to be a propositional connective dual to the connective of implication. Like implication, exclusion turns out to be non-connexive in both classical and intuitionistic logic, in the sense that it does not satisfy certain principles that express such connexivity. We formulate these principles for connexive exclusion; they are in some sense dual to the well-known Aristotle’s and Boethius’ theses for connexive implication. A logical system in a language containing exclusion and negation can be called a logic of connexive exclusion if and only if it obeys these principles and, in addition, its connective of exclusion is asymmetric, thus being different from a simple mutual incompatibility of propositions. We develop an approach to such a logic of connexive exclusion based on a semantic justification of the connective in question. Our paradigm logic of connexive implication will be the connexive logic C, and, exactly like C, the logic of connexive exclusion turns out to be contradictory though not trivial.
-
When not in the courtroom, the Nuremberg prisoners were visited and interviewed by American psychiatrists. This poem is based on the interviews with Hermann Goering. (For more context, see Goering at Nuremberg, 1. …
-
Over the last month or so, I’ve been hammering out a new AI wager with Holden Karnofsky, co-founder of GiveWell and Open Philanthropy. I think of it as the “Feast-or-Famine” bet. Holden suspects that AI will dramatically change the global economy, but he’s undecided about whether the changes will be dramatically good or dramatically bad. …
-
The Kepler problem is the study of a particle moving in an attractive inverse square force. In classical mechanics, this problem shows up when you study the motion of a planet around the Sun in the Solar System. …
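The inverse-square setup can be made concrete with a few lines of numerical integration. Below is a minimal sketch (my illustration, not from the post; all variable names are mine): a leapfrog integration of a bound orbit in units with GM = 1, checking that the Laplace–Runge–Lenz vector A = p × L − x/|x|, the extra conserved quantity behind the Kepler problem's special symmetry, stays essentially constant.

```python
import math

def step(x, y, vx, vy, dt):
    """One velocity-Verlet (leapfrog) step for the force F = -x/|x|^3."""
    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx; y += dt * vy                 # drift
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return x, y, vx, vy

def lrl(x, y, vx, vy):
    """Laplace-Runge-Lenz vector in 2D: A = p x L - x/r, with L = x*vy - y*vx."""
    L = x * vy - y * vx
    r = math.hypot(x, y)
    return vy * L - x / r, -vx * L - y / r

# Start at perihelion of an elliptical orbit (speed below escape velocity).
x, y, vx, vy = 1.0, 0.0, 0.0, 1.2
A0 = lrl(x, y, vx, vy)
for _ in range(20000):
    x, y, vx, vy = step(x, y, vx, vy, 1e-3)
A1 = lrl(x, y, vx, vy)

# The LRL vector points along the major axis and barely drifts over the run.
drift = math.hypot(A1[0] - A0[0], A1[1] - A0[1])
assert drift < 1e-3
```

The magnitude of A equals the orbit's eccentricity (here 0.44), so its near-constancy is a direct numerical fingerprint of the non-precessing ellipse peculiar to the inverse-square force.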
-
This paper proposes that the evolution of consciousness can be partially understood through increasingly complex forms of exploration. We trace how features such as integration, intentionality, temporality, and valence evolved as functional tools for dealing with uncertainty and contradiction. Central to this process is a shift from implicit to explicit representation, which we relate to established models of consciousness levels. Our approach emphasizes structural and functional continuity between these levels, while avoiding sharp thresholds or binary distinctions. Understood as exploration, consciousness supports what Stegmaier (2019) calls orientation, the achievement of finding one’s way in a changing environment by establishing temporary relevance and stability in conditions of uncertainty. We argue that exploration provides a productive framework for understanding how conscious capacities developed in response to situational demands. The account further raises questions about the conditions under which synthetic systems might replicate conscious capacities, highlighting the role of affect, embodiment, and representational structure in the evolution of conscious cognition.
-
This paper introduces the Representational Uncertainty Principle (RUP) as a structural account of the limits of representational precision. We argue that as representations become more narrowly defined—by fixing more internal structure—they constrain the integration of perceptual and contextual cues. This often suppresses representational flexibility: the capacity to draw on multiple situational cues to stabilize meaning. When this flexibility is reduced, representational diffraction becomes more prominent: a structural phenomenon in which aspects of a situation are subsumed under a representation that deviates from the expected or standard framing, resulting in ambiguity or tension. Drawing on a structural analogy with quantum mechanics, we treat interference and diffraction as complementary manifestations of how representational content is formed. This framework explains why overly precise representations often fail in contexts that demand sensitivity to subtle variations. We support this account through examples of conceptual ambiguity and apparent contradiction, and by developing a framework that distinguishes between the structuring role of the representational vehicle and the dynamic process of integration that gives rise to content. The RUP thus highlights a structural tension between abstraction, context sensitivity, and the need for orientation within experience.
-
I review the works of Gärdenfors (1990) and Scorzato (2013) and show that their combination provides an elegant solution to Goodman’s new riddle of induction. The solution is based on two main ideas: (1) clarifying what is expected from a solution: understanding that philosophy of science is a science itself, with the same limitations and strengths as other scientific disciplines; and (2) understanding that the concept of the complexity of a model’s assumptions and the concept of direct measurements must be characterized together. Although both measurements and complexity have been the subject of a vast literature within the philosophy of science, essentially no other attempt has been made to combine them. The widespread expectation among modern philosophers that Goodman’s new riddle cannot be solved is clearly not defensible without serious exploration of such a natural approach. A clarification of this riddle has always been very important, but it has become even more crucial in the age of AI.
-
Lévy’s Upward Theorem says that the conditional expectation of an integrable random variable converges with probability one to its true value as information increases. In this paper, we use methods from effective probability theory to characterise the probability-one set along which convergence to the truth occurs, and the rate at which the convergence occurs. We work within the setting of computable probability measures defined on computable Polish spaces and introduce a new general theory of effective disintegrations. We use this machinery to prove our main results, which (1) identify the points along which certain classes of effective random variables converge to the truth in terms of certain classes of algorithmically random points, and (2) identify when computable rates of convergence exist. Our convergence results significantly generalize earlier results within a unifying novel abstract framework, and there are no precursors of our results on computable rates of convergence. Finally, we make a case for the importance of our work for the foundations of Bayesian probability theory.
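As a toy instance of the convergence in question (my example, not the paper's): take X uniform on [0,1] and condition on its first n binary digits. The conditional expectation is the midpoint of the dyadic interval containing X, and the convergence rate is computable in the most literal sense, since the error after n digits is at most 2^-(n+1).

```python
import math

def cond_expectation(x: float, n: int) -> float:
    """E[X | first n binary digits of X] for X ~ Uniform[0,1]:
    the midpoint of the dyadic interval of width 2**-n containing x."""
    k = math.floor(x * 2**n)
    return (k + 0.5) / 2**n

x = 0.637  # a sample point: "the truth" the estimates converge to
errors = [abs(cond_expectation(x, n) - x) for n in range(1, 21)]

# Convergence to the truth, with an explicit computable rate 2**-(n+1).
assert all(e <= 2**-(n + 1) for n, e in zip(range(1, 21), errors))
assert errors[-1] < 1e-5
```

The paper's point is that in the general effective setting such rates need not exist; this dyadic case is the best-behaved extreme, where the disintegration is trivial to compute.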
-
Optimalism holds that, of metaphysical necessity, the best world is actualized. There are two ways to understand “the best world”: (1) the best of all metaphysically possible worlds and (2) the best of all (narrowly) logically possible worlds. …
-
What is the relation between the phenomenal properties of experience and physical properties, such as physical properties of the brain? I evaluate the proposal that phenomenal properties are determinables of physical realizer determinates, focusing on Jessica Wilson’s response to a prominent argument for thinking that phenomenal properties cannot be understood in this way. Wilson premises her response on the idea that phenomenal properties admit of physical determination dimensions, which can be discovered through the relevant sciences. I provide several reasons for questioning this way of understanding the relation between the phenomenal and the physical, centered on the idea that even if phenomenal properties have physical determination dimensions, it remains to be shown that these determine the physical realizers of phenomenal properties, and I provide reasons for denying that this is the case. I then address Wilson’s “powers-based conception” of the determinable/determinate relation and argue that it faces difficulties both independent from and in relation to the view of phenomenal properties as determinables of physical realizer determinates.
-
As I write in the spring of 2025, we are in the midst of a crisis in the United States. The crisis is economic, social, political, and legal. One dimension of this crisis is the attack on higher education by the Trump administration. To date, this attack has included:
o Cuts to funding for existing federal grants to higher education
o Substantive content restrictions on applications for new grants
o Deporting, or canceling visas of, international students and scholars without due cause
o Denial of entry into the United States of international scholars traveling for academic or research activities
o Increases to the amount of overhead that universities must pay to support federal grants
o Threats to increase endowment tax on universities from 1.4% to as much as 35%
o Closure, or severe cuts to funding, of libraries, museums, and archives.
-
Robert H. Jackson was an Associate Justice on the U.S. Supreme Court by day, and one of the most eloquent men alive. In 1945-46 he led the American prosecution at the Nuremberg Trials, “perhaps the greatest opportunity ever presented to an American lawyer.” His powerful opening statement testifies to the power of words, used well, to shape the meaning of events:
That four great nations, flushed with victory and stung with injury stay the hand of vengeance and voluntarily submit their captive enemies to the judgment of the law is one of the most significant tributes that Power ever has paid to Reason. …
-
There are four well-known models of fundamental objective probabilistic reality: classical probability, comparative probability, non-Archimedean probability, and primitive conditional probability. I offer two desiderata for an account of fundamental objective probability: comprehensiveness and non-superfluity. It is plausible that classical probabilities lack comprehensiveness by not capturing some intuitively correct probability comparisons, such as that it is less likely that 0 = 1 than that a dart randomly thrown at a target will hit the exact center, even though both classically have probability zero. We thus want a comparison between probabilities with a higher resolution than we get from classical probabilities. Comparative and non-Archimedean probabilities have a hope of providing such a comparison, but for known reasons do not appear to satisfy our desiderata. The last approach to this problem is to employ primitive conditional probabilities, such as Popper functions, and then argue that P(0 = 1 | 0 = 1 or hit center) = 0 < 1 = P(hit center | 0 = 1 or hit center). But now we have a technical question: how can we reconstruct a probability comparison, ideally satisfying the standard axioms of comparative probability, from a primitive conditional probability? I will prove that, given some plausible assumptions, it is impossible to perform this task: conditional probabilities just do not carry enough information to define a satisfactory comparative probability. The result is that none of the models satisfies our two desiderata. We end by briefly considering three paths forward.
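The technical question can be made explicit. One natural candidate definition (my notation, not necessarily the paper's) builds the comparative relation directly out of the Popper function:

```latex
% Candidate reconstruction of a comparative relation $\preceq$ from a
% primitive conditional probability (Popper function) $P(\cdot \mid \cdot)$:
\[
  A \preceq B
  \quad\Longleftrightarrow\quad
  P(A \mid A \cup B) \;\le\; P(B \mid A \cup B).
\]
% This handles the dart example, since
% $P(0{=}1 \mid 0{=}1 \text{ or hit center}) = 0 < 1
%  = P(\text{hit center} \mid 0{=}1 \text{ or hit center})$.
```

The impossibility result described in the abstract is precisely that, under plausible assumptions, no definition along these lines can satisfy all the standard axioms of comparative probability (e.g. transitivity) at once.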
-
Weatherall and Manchak (2014) show that Reichenbachian universal effects, constrained to a rank-2 tensor field representation in the geodesic equation, always exist in non-relativistic gravity but not in relativistic spacetimes. Thus general relativity is less susceptible to underdetermination than its Newtonian predecessor. Dürr and Ben-Menahem (2022) argue that these assumptions are exploitable as loopholes, effectively establishing a (rich) no-go theorem. I disambiguate two targets of the proof, which have previously been conflated: the existence claim that there is at least one alternative geometry to a given one, and Reichenbach's (in)famous "theorem theta", which amounts to the universality claim that any geometry can function as an alternative to any other. I show there is no (rich) no-go theorem to save theorem theta. I illustrate this by explicitly breaking one of the assumptions and generalising the proof to torsionful spacetimes. Finally, I suggest a programmatic attitude: rather than undermining the proof, one can use it to systematically and rigorously articulate stronger propositions to be proved, thereby systematically exploring the space of alternative spacetime theories.
-
On All-False Open Futurism (AFOF), any future tensed statement about a future contingent must be false. It is false that there will be a sea battle tomorrow, for instance. Suppose now I realize that due to a bug, tomorrow I will be able to transfer ten million dollars from a client’s account to mine, and then retire to a country that won’t extradite me. …
-
Very short summary: This essay discusses the distinction between the public and the private. This distinction has a particular value in liberal societies. I argue that publicity is a requirement of social morality. …
-
We can now simulate environments containing vast numbers of agents engaging in complex interactions. Given projected advances in computing power, it is reasonable to expect that we will one day be able to create simulated agents that think and feel much as we do. One might doubt that sims will ever be able to feel, but the view that sims can be conscious has been defended by proponents of the simulation argument (Bostrom 2003; Chalmers 2022: ch. 5, ch. 15). Here, I assume that sims can be conscious. In what follows, I will use “simulations” to refer narrowly to simulations involving such agents, unless otherwise stated. Additionally, following Bostrom (2003), I will use “simulations” to refer to “ancestor simulations”—simulations of posthuman civilizations’ evolutionary histories, again unless otherwise stated. This focus is not due to the relevance of repeating details of history, but is meant rather to maintain focus on simulations whose scale and complexity resemble those of our universe.