1.
    It has been argued that inductive underdetermination entails that machine learning algorithms must be value-laden. This paper offers a more precise account of what it would mean for a “machine learning algorithm” to be “value-laden,” and, building on this, argues that a general argument from underdetermination does not warrant this conclusion.
    Found 18 hours, 38 minutes ago on PhilSci Archive
  2.
    Mathematics is the “language of nature,” a privileged mode of expression in science. We think it latches onto something essential about the physical universe, and we seek theories that reduce phenomena to mathematical laws. Yet, this attitude could not arise from the philosophies dominant before the early modern period. In orthodox Aristotelianism, mathematical categories are too impoverished to capture the causal structure of the world. In the revived Platonism of its opponents, the natural world is too corrupt to exemplify mathematical perfection. Modern mathematical science required a novel tertium quid, due to Pietro Catena.
    Found 1 day, 10 hours ago on PhilSci Archive
  3.
    The standard theory of choice in economics involves modelling human agents as if they had precise attitudes when in fact they are often fuzzy. For the normative purposes of welfare economics, it might be thought that the imposition of a precise framework is nevertheless well justified: If we think the standard theory is normatively correct, and therefore that agents ought to be in this sense precise, then doesn’t it follow that their true welfare can be measured precisely? I will argue that this thought, central to the preference purification project in behavioural welfare economics, commits a fallacy. The standard theory requires agents to adopt precise preferences; but neither the theory nor a fuzzy agent’s initial attitudes may determine a particular way in which she ought to precisify them. So before actually having precisified her preferences, the welfare of fuzzy agents may remain indeterminate. I go on to consider the implications of this fallacy for welfare economics.
    Found 4 days, 2 hours ago on Johanna Thoma's site
  4.
    The idea that people make mistakes in how they pursue their own best interests, and that we can identify and correct for these mistakes, has been central to much recent work in behavioural economics, and the ‘nudge’ approach to public policy grounded on it. The focus in this literature has been on individual choices that are mistaken. Agreeing with, and building on, the criticism that this literature has been too quick to identify individual choices as mistaken, I argue that it has also overlooked a kind of mistake that is potentially more significant: irreducibly diachronic mistakes, which occur when series of choices over time do not serve our interests well, even though no individual choice can be identified as a mistake. I argue for the claim that people make such mistakes, and reflect on its significance for welfare economics.
    Found 4 days, 2 hours ago on Johanna Thoma's site
  5.
    In her Choosing Well, Chrisoula Andreou puts forth an account of instrumental rationality that is revisionary in two respects. First, it changes the goalpost or standard of instrumental rationality to include “categorial” appraisal responses, alongside preferences, which are relational. Second, her account is explicitly diachronic, applying to series of choices as well as isolated ones. Andreou takes both revisions to be necessary for dealing with problematic choice scenarios agents with disorderly preferences might find themselves in. Focusing on problem cases involving cyclical preferences, I will first argue that her first revision is undermotivated once we accept the second. If we are willing to grant that there are diachronic rationality constraints, the preference-based picture can get us further than Andreou acknowledges. I will then turn to present additional grounds for rejecting the preference-based picture. However, these grounds also seem to undermine Andreou’s own appeal to categorial appraisal responses.
    Found 4 days, 2 hours ago on Johanna Thoma's site
  6.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
    Found 5 days, 9 hours ago on Michael Klenk's site
  7.
    Sleeping Beauty, the renowned Bayesian reasoner, has enrolled in an experiment at the Experimental Philosophy Lab. On Sunday evening, she is put to sleep. On Monday, the experimenters awaken her. After a short chat, the experimenters tell her that it is Monday. She is then put to sleep again, and her memories of everything that happened on Monday are erased. The experimenters then toss a coin. If and only if the coin lands tails, the experimenters awaken her again on Tuesday. Beauty is told all this on Sunday. When she awakens on Monday – unsure of what day it is – what should her credence be that the coin toss on Monday lands heads?
    Found 1 week ago on PhilSci Archive
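    A minimal Monte Carlo sketch of this protocol (purely illustrative; the function name and trial count are mine) counts how often an awakening falls in a run where the coin lands heads. The long-run fraction comes out at about 1/3, the frequency that ‘thirders’ appeal to; whether that frequency fixes Beauty’s credence is exactly what the puzzle disputes.

    import random

    def awakening_heads_fraction(trials=100_000):
        total_awakenings = 0
        heads_awakenings = 0
        for _ in range(trials):
            # Monday: Beauty is always awakened once; her memory of it is then erased.
            awakenings = 1
            # After the Monday awakening, the coin is tossed.
            heads = random.random() < 0.5
            # Tuesday: she is awakened again only if the coin lands tails.
            if not heads:
                awakenings += 1
            total_awakenings += awakenings
            if heads:
                heads_awakenings += awakenings
        return heads_awakenings / total_awakenings

    print(awakening_heads_fraction())  # roughly 1/3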
  8.
    The physical meaning of the operators is not reducible to the intrinsic relations of the quantum system, since unitary transformations can find other operators satisfying the exact same relations. The physical meaning is determined empirically. I propose that the assignment of physical meaning to operators spreads through observation, along with the values of the observables, from the already observed degrees of freedom to the newly observed ones. I call this process “physication”. I propose that quantum observations are nothing more than this assignment, which can be done unitarily. This approach doesn’t require collapse, many-worlds, or a conspiratorial fine tuning of the initial conditions.
    Found 1 week ago on PhilSci Archive
  9.
    I describe two candidate representations of a mixture. The first, which I call the standard representation, is not a good representation of a mixture in spite of its widespread popularity. The second, which I call Gibbs’s representation, is less widely adopted but is, I argue, a much better representation. I show that once we have a precise mathematical structure that can be used to represent thermodynamic systems, and once an adequate perspective on representation is adopted, Gibbs’s representation trumps the standard representation.
    Found 1 week, 1 day ago on Bryan W. Roberts's site
  10.
    This thesis proposes mathematically precise analyses of the concepts of identity and indistinguishability and explores their physical consequences in thermodynamics and statistical mechanics. I begin by exploring the philosophical consequences of the geometric formulation of thermodynamics, well-known to many mathematicians. Based on this, I offer novel accounts of what it means to be a thermodynamic system and what it means to be a composite system. I then use these mathematical tools to offer new and precise definitions of ‘mixture’ and ‘identity’ in thermodynamics. These analyses allow me to propose a novel resolution of Gibbs’ paradox. Finally, I offer a new definition of indistinguishability in statistical mechanics with a view to offering a new resolution of Gibbs’ paradox in statistical mechanics (the N! problem). My analysis highlights the importance of observables in the foundations of statistical theories.
    Found 1 week, 1 day ago on Bryan W. Roberts's site
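    For background, the “N! problem” referred to here has a standard textbook statement (this is the usual setup, not the thesis’s own formulation). Counting phase-space volume without a 1/N! factor gives an ideal-gas entropy that is not extensive,
    \[ S(N, V, T) = N k_B \left[ \ln\!\big( V\, n_Q(T) \big) + \tfrac{3}{2} \right], \]
    where n_Q(T) is the quantum concentration; combining two identical samples of gas then yields a spurious entropy increase of 2 N k_B ln 2. Dividing the phase-space volume by N! subtracts k_B ln N! ≈ N k_B (ln N − 1), which restores extensivity and removes the spurious mixing entropy for identical gases; why indistinguishability licenses that division is the puzzle being revisited.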
  11.
    This thesis investigates two kinds of conventionalism in the context of two issues in the philosophy of spacetime: the Einstein Algebra formulation of General Relativity (GR) and the status of simultaneity in special relativity. The outcome of the analysis is that these two cases pull in different directions: I take a step back and analyse the strategy of breaking underdetermination by the invocation of what is often thought of as “non-epistemic” virtues. I argue that certain such virtues are more epistemically relevant than previously thought, in particular where these virtues have to do with the ability of a theory to “point ahead” towards new theories. The conclusion is that the underdetermination between the two formulations of GR only prima facie requires breaking by convention. On the other hand, a careful appraisal of the relativistic limit of Minkowski spacetime leads to the conclusion that relativistic simultaneity is in a precise sense so conventional as to be devoid of content.
    Found 1 week, 1 day ago on Bryan W. Roberts's site
  12.
    In this paper, we present an agent-based model for studying the impact of ‘myside bias’ on the argumentative dynamics in scientific communities. Recent insights in cognitive science suggest that scientific reasoning is influenced by ‘myside bias’. This bias manifests as a tendency to prioritize the search and generation of arguments that support one’s views rather than arguments that undermine them. Additionally, individuals tend to apply more critical scrutiny to opposing stances than to their own. Although myside bias may pull individual scientists away from the truth, its effects on communities of reasoners remain unclear. The aim of our model is two-fold: first, to study the argumentative dynamics generated by myside bias, and second, to explore which mechanisms may act as a mitigating factor against its pernicious effects. Our results indicate that biased communities are epistemically less successful than non-biased ones, and that they also tend to be less polarized than non-biased ones. Moreover, we find that two socio-epistemic mechanisms help communities to mitigate the effect of the bias: the presence of a common filter on weak arguments, which can be interpreted as shared beliefs, and an equal distribution of agents for each alternative at the start of the scientific debate.
    Found 1 week, 1 day ago on PhilSci Archive
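    The following is a minimal, purely illustrative sketch of the kind of mechanism described; it is not the authors’ model, and the agent count, bias parameters, and update rule are hypothetical placeholders. Agents preferentially generate arguments for their own side and apply extra scrutiny to arguments that oppose it.

    import random

    N_AGENTS = 20
    ROUNDS = 500
    MYSIDE_BIAS = 0.8        # chance of searching for an argument that supports one's own view
    SCRUTINY_PENALTY = 0.5   # extra chance of rejecting an argument that opposes one's view
    ARGUMENT_QUALITY = {"pro": 0.7, "con": 0.3}   # pro-H arguments are objectively stronger

    def generate_argument(credence):
        """Mostly search for arguments supporting the side one already favours."""
        own_side = "pro" if credence >= 0.5 else "con"
        other_side = "con" if own_side == "pro" else "pro"
        side = own_side if random.random() < MYSIDE_BIAS else other_side
        holds_up = random.random() < ARGUMENT_QUALITY[side]
        return side, holds_up

    def update(credence, side, holds_up, step=0.05):
        """Shift credence toward the argument's side, unless it fails or is screened out."""
        if not holds_up:
            return credence
        opposes_own_view = (side == "con") == (credence >= 0.5)
        if opposes_own_view and random.random() < SCRUTINY_PENALTY:
            return credence  # rejected after harsher scrutiny of the opposing argument
        return min(1.0, credence + step) if side == "pro" else max(0.0, credence - step)

    credences = [random.random() for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        speaker = random.randrange(N_AGENTS)
        side, holds_up = generate_argument(credences[speaker])
        credences = [update(c, side, holds_up) for c in credences]   # argument is shared with everyone

    print("mean credence in H:", round(sum(credences) / N_AGENTS, 3))
    print("spread (max - min):", round(max(credences) - min(credences), 3))

    Varying MYSIDE_BIAS and SCRUTINY_PENALTY, or filtering out weak arguments before they are shared, is the kind of intervention whose community-level effects the paper’s (much richer) model investigates.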
  13.
    Comonotonicity (“same variation”) of random variables minimizes hedging possibilities and has been widely used, e.g., in Gilboa and Schmeidler’s ambiguity models. This paper investigates anticomonotonicity (“opposite variation”; abbreviated “AC”), the natural counterpart to comonotonicity. It minimizes leveraging rather than hedging possibilities. Surprisingly, AC restrictions of several traditional axioms do not give new models. Instead, they strengthen the foundations of existing classical models: (a) linear functionals through Cauchy’s equation; (b) Anscombe-Aumann expected utility; (c) as-if-risk-neutral pricing through no-arbitrage; (d) de Finetti’s bookmaking foundation of Bayesianism using subjective probabilities; (e) risk aversion in Savage’s subjective expected utility. In each case, our generalizations show where the critical tests of classical axioms lie: in the AC cases (maximal hedges). We next present examples where AC restrictions do essentially weaken existing axioms, and do provide new properties and new models.
    Found 1 week, 3 days ago on Peter P. Wakker's site
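    For readers new to the terminology: on the standard definitions (the paper’s own statement may differ in detail), two random variables X and Y on a state space S are comonotonic iff
    \[ \big(X(s) - X(t)\big)\,\big(Y(s) - Y(t)\big) \ \ge\ 0 \quad \text{for all } s, t \in S, \]
    and anticomonotonic iff the product is ≤ 0 for all s, t. Comonotonic variables move together and so cannot hedge one another; anticomonotonic variables move oppositely and so cannot leverage (amplify) one another, which is the sense in which AC “minimizes leveraging rather than hedging possibilities.”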
  14.
    A sentential connective ⋆ is said to be univocal, relative to a formal system F for a sentential logic containing ⋆, iff any two connectives ⋆1 and ⋆2 which satisfy the same F rules (and axioms) as ⋆ are such that similar formulas involving ⋆1 and ⋆2 are inter-derivable in F. To be more precise, suppose ⋆ is a unary connective. Then ⋆ is univocal relative to F iff for any ⋆1 and ⋆2 satisfying the same principles as ⋆ in F, we have ⋆1α ⊢F ⋆2α. And, if ⋆ is binary, then ⋆ is univocal relative to F iff for any ⋆1 and ⋆2 satisfying the same principles as ⋆ in F, we have α ⋆1 β ⊢F α ⋆2 β. In order to illustrate this definition of univocity, it is helpful to begin with a simple historical example.
    Found 1 week, 3 days ago on Branden Fitelson's site
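    A standard illustration of univocity in this sense (whether or not it is the historical example the paper goes on to give) is conjunction in a natural-deduction system F: if ∧1 and ∧2 both satisfy the usual introduction and elimination rules, then
    \[ \alpha \wedge_1 \beta \vdash_F \alpha, \qquad \alpha \wedge_1 \beta \vdash_F \beta \quad (\wedge_1\text{-elimination}), \qquad \alpha,\ \beta \vdash_F \alpha \wedge_2 \beta \quad (\wedge_2\text{-introduction}), \]
    so chaining these gives α ∧1 β ⊢F α ∧2 β, and the converse direction follows by symmetry. Hence conjunction is univocal relative to F.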
  15.
    We set up a general framework for higher order probabilities. A simple HOP (Higher Order Probability space) consists of a probability space and an operation PR, such that, for every event A and every real closed interval Δ, PR(A, Δ) is the event that A’s "true" probability lies in Δ. (The "true" probability can be construed here either as the objective probability, or the probability assigned by an expert, or the one assigned eventually in a fuller state of knowledge.) In a general HOP the operation PR has also an additional argument ranging over an ordered set of time-points, or, more generally, over a partially ordered set of stages; PR(A, t, Δ) is the event that A's probability at stage t lies in Δ. First we investigate simple HOPs and then the general ones. Assuming some intuitively justified axioms, we derive the most general structure of such a space. We also indicate various connections with modal logic.
    Found 1 week, 4 days ago on Haim Gaifman's site
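    The best-known constraint in this family (a Miller-style deference principle; the paper’s exact axioms may be stated differently) links the first-order probability to the higher-order event:
    \[ P\big(A \mid \mathrm{PR}(A, \Delta)\big) \in \Delta \quad \text{whenever } P\big(\mathrm{PR}(A, \Delta)\big) > 0, \]
    i.e., conditional on the event that A’s "true" probability lies in the interval Δ, the agent’s probability for A must itself lie in Δ.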
  16.
    Theories of ‘actual causation’ aim to provide an informative guide for assessing which events cause which others in circumstances where almost everything else is known: which other events occurred or did not occur, and how (if at all) the occurrence or non-occurrence of a particular event (regarded as values of a variable) can depend on specific other events (or their absence), also regarded as values of variables. The ultimate aim is a theory that can agreeably be applied in causally fraught circumstances of technology, the law, and everyday life, where the identification of relevant features is not immediate and judgements of causation are entwined with judgements of moral or legal responsibility. Joseph Halpern’s Actual Causality is the latest and most extensive addition to this effort, carried out in a tradition that holds causation to be difference making.
    Found 1 week, 6 days ago on PhilSci Archive
  17.
    Is philosophy of science best carried out at a fine-grained level, focusing on the theories and methods of individual sciences? Or is there still room for a general philosophy of science, for the study of philosophical questions about science as such? For Samuel Schindler, the answer to the last question is a resounding ‘yes!’, and his book Theoretical Virtues in Science is an unapologetic attempt to grapple with what he regards as three key questions for philosophy of science-in-general: What are the features—the virtues—that characterize good scientific theories? What role do these virtues play in scientific inquiry? And what do they allow us, as philosophers, to conclude about reality?
    Found 1 week, 6 days ago on PhilSci Archive
  18.
    Expected value maximization gives plausible guidance for moral decision-making under uncertainty in many situations. But it has extremely unappetizing implications in ‘Pascalian’ situations involving tiny probabilities of extreme outcomes. This paper shows, first, that under realistic levels of ‘background uncertainty’ about sources of value independent of one’s present choice, a widely accepted and apparently innocuous principle—stochastic dominance—requires that prospects be ranked by the expected value of their consequences in most ordinary choice situations. But second, this implication does not hold when differences in expected value are driven by tiny probabilities of extreme outcomes. Stochastic dominance therefore lets us draw a surprisingly principled line between ‘ordinary’ and ‘Pascalian’ situations, providing a powerful justification for de facto expected value maximization in the former context while permitting deviations in the latter. Drawing this distinction is incompatible with an in-principle commitment to maximizing expected value, but does not require too much departure from decision-theoretic orthodoxy: it is compatible, for instance, with the view that moral agents must maximize the expectation of a utility function that is an increasing function of moral value.
    Found 2 weeks ago on Christian Tarsney's site
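    For reference, the dominance principle invoked here is standardly defined as follows: prospect P stochastically dominates prospect Q iff, for every threshold of value v, P is at least as likely as Q to yield an outcome at least as good as v, and strictly more likely for some v; in symbols,
    \[ \Pr_P(\text{value} \ge v) \ \ge\ \Pr_Q(\text{value} \ge v) \ \text{ for all } v, \quad \text{with strict inequality for some } v. \]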
  19.
    In this work we present an invariant-objective formalization of multi-screen entanglement grounded on Tensorial Quantum Mechanics (TQM) [12]. This new tensorial formulation of the theory of quanta —basically, an extension of Heisenberg’s matrix mechanics— allows us not only to escape the many problems present in the current account of multi-partite entanglement grounded on the Dirac–von Neumann Standard formulation of Quantum Mechanics (SQM) but, more importantly, to consistently represent entanglement phenomena when considering a multiplicity of different screens and detectors.
    Found 2 weeks ago on PhilSci Archive
  20.
    Halvorson has proposed an intriguing example of a pair of theories whose categories are equivalent but which are not themselves definitionally equivalent. Moreover, it seems obvious that these theories are not equivalent in any intuitive sense. We offer a new topological proof that these theories are not definitionally equivalent. However, the underlying theorem for this claim has a converse that shows a surprising collection of theories, which are superficially similar to those in Halvorson’s example, turn out to be definitionally equivalent after all. This offers some new insight into what is going “wrong” in the Halvorson example.
    Found 2 weeks, 1 day ago on PhilSci Archive
  21.
    The much-debated Reflection principle states that a coherent agent’s credences must match their estimates for their future credences. Defenders claim that there are Dutch-book arguments in its favor, putting it on the same normative footing as probabilistic coherence. Critics claim that those arguments rely on the implicit, implausible assumption that the agent is introspective: that they are certain what their own credences are. In this paper, we clarify this debate by surveying several different conceptions of the book scenario. We show that the crucial disagreement hinges on whether agents who are not introspective are known to reliably act on their credences: if they are, then coherent Reflection failures are (at best) ephemeral; if they aren’t, then Reflection failures can be robust—and perhaps rational and coherent. We argue that the crucial question for future debates is which notion of coherence makes sense for such unreliable agents, and sketch a few avenues to explore.
    Found 2 weeks, 1 day ago on PhilSci Archive
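    In its standard (van Fraassen-style) formulation, the principle at issue says, roughly, that for an agent’s current credence function P and her credence function P_t at a later time t,
    \[ P\big(A \mid P_t(A) = x\big) = x, \qquad \text{or, in the estimate form used above,} \qquad P(A) = \mathbb{E}_P\big[P_t(A)\big]. \]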
  22.
    The application of Noether’s theorem to the exact SU(3) color symmetry of quantum chromodynamics results in the conservation of the color charge current. This current takes values in SU(3)’s Lie algebra, and it is therefore eight-dimensional. But how can this eight-dimensional space be the right mathematical object for the conservation of the three color charges red, blue, and green and their three corresponding anti-colors? We might have expected a six-dimensional space, or perhaps a nine-dimensional one, but eight is surprising. This paper answers this question through explicit construction of the SU(3) adjoint representation from the two fundamental representations of SU(3). This construction generates principled reasons for interpreting elements of the SU(3) Lie algebra as bearing combinations of color and anti-color. In light of this construction, this paper contrasts mathematical and conceptual features of color charge conservation with electric charge conservation, thereby highlighting some of the challenges and subtleties of interpreting non-Abelian gauge theories.
    Found 2 weeks, 2 days ago on PhilSci Archive
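    The dimension count behind the puzzle, which the paper then interprets, is standard group theory:
    \[ \dim \mathrm{SU}(3) = 3^2 - 1 = 8, \qquad \mathbf{3} \otimes \bar{\mathbf{3}} = \mathbf{8} \oplus \mathbf{1}, \]
    i.e., combining a fundamental (color) index with an anti-fundamental (anti-color) index gives nine combinations, of which the color-singlet is projected out, leaving the eight-dimensional adjoint representation in which the conserved current takes its values.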
  23.
    How do social factors affect group learning in diverse populations? Evidence from cognitive science gives us some insight into this question, but is generally limited to showing how social factors play out in small groups over short time periods. To study larger groups and longer time periods, we argue that we can combine evidence about social factors from cognitive science with agent-based models of group learning. In this vein, we demonstrate the usefulness of idealized models of inquiry, in which the assumption of Bayesian agents is used to isolate and explore the impact of social factors. Focusing on the impacts of homophily – the tendency of individuals to associate with similar others – on group inquiry, we show that whether a certain social factor is beneficial to the community’s epistemic aims depends on its particular manifestation.
    Found 2 weeks, 2 days ago on PhilSci Archive
  24.
    In conceptual debates involving the quantum gravity community, the literature discusses the so-called “emergence of space-time”. However, which interpretation of quantum mechanics could be coherent with such a claim? We show that a modification of the Copenhagen Interpretation of quantum mechanics is compatible with the claim that space-time is emergent for the macroscopic world of measurements. In other words, pure quantum states do not admit space-time properties until we measure them. We call this approach the “Achronotopic” (ACT) Interpretation of quantum mechanics, which yields a simple and natural interpretation of the most puzzling aspects of quantum mechanics, such as particle-wave duality, wave function collapse, entanglement, and quantum superposition. Our interpretation yields the same results in all measurements as the Copenhagen Interpretation, but provides clues toward the sub-Planckian physics. In particular, it suggests the non-existence of quantum gravity in the conventional sense understood as the quantization of a classical theory.
    Found 2 weeks, 2 days ago on PhilSci Archive
  25.
    Thinking about Statistics by Jun Otsuka is a fine book, engagingly written and full of interesting details. Its subtitle, The Philosophical Foundations, might give the idea that we are dealing with philosophy of statistics, but the author makes clear that this would be a mistake: his aim is not to cover the “wealth of discussions concerning the theoretical ground of inductive inference, interpretations of probability, the everlasting battle between Bayesian and frequentist statistics, and so forth” (3). Nor is the book meant as an introduction, be it to statistics or to philosophy (ibid.), even though it contains lucid expositions on p-values, confidence levels, and significance tests, as well as instructive explanations of philosophical positions concerning probabilistic inference.
    Found 2 weeks, 3 days ago on Jun Otsuka's site
  26.
    Stefan Riedener, Uncertain Values: An Axiomatic Approach to Axiological Uncertainty, De Gruyter, 2021, 167pp, $16.99, ISBN 978-3-11-073957-2. Stefan Riedener’s book is concerned with axiological uncertainty — that is, the problem of how to evaluate prospects given uncertainty about what is the correct axiology. For evaluations of this kind of meta value, Riedener uses the term ‘?-value’ (3). The main goal of the book is to provide an axiomatic argument for Expected Value Maximization, which is the view that one option has at least as great a ?-value as another if and only if it has at least as great an expected value, where the expected value of an option is a sum of the value of the option on each axiology weighted by one’s credence in the axiology (5).
    Found 2 weeks, 4 days ago on Johan E. Gustafsson's site
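    In symbols (following the review’s own gloss, with notation chosen here for illustration): given credences c(T_1), …, c(T_n) over candidate axiologies, each assigning value V_i(o) to an option o, the expected value is
    \[ \mathrm{EV}(o) \;=\; \sum_{i=1}^{n} c(T_i)\, V_i(o), \]
    and Expected Value Maximization ranks one option at least as high as another just in case its expected value is at least as great.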
  27.
    In recent years, there has been a proliferation of competing conceptions of what it means for a predictive algorithm to treat its subjects fairly. Most approaches focus on explicating a notion of group fairness, i.e. of what it means for an algorithm to treat one group unfairly in comparison to another. In contrast, Dwork et al. (2012) attempt to carve out a formalised conception of individual fairness, i.e. of what it means for an algorithm to treat an individual fairly or unfairly. In this paper, I demonstrate that the conception of individual fairness advocated by Dwork et al. is closely related to a criterion of group fairness, called ‘base rate tracking’, introduced in Eva (2022). I subsequently show that base rate tracking solves some fundamental conceptual problems associated with the Lipschitz criterion, before arguing that group level fairness criteria are at least as powerful as their individual level counterparts when it comes to diagnosing algorithmic bias.
    Found 2 weeks, 4 days ago on Benjamin Eva's site
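    For context, the Dwork et al. (2012) criterion discussed here (the ‘Lipschitz criterion’) formalizes “treat similar individuals similarly”: for a randomized classifier M mapping individuals to distributions over outcomes, a task-specific similarity metric d on individuals, and a distance D on outcome distributions, it requires
    \[ D\big(M(x), M(y)\big) \ \le\ d(x, y) \quad \text{for all individuals } x, y. \]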
  28.
    I introduce a novel method for evaluating counterfactuals. According to the branchpoint proposal, counterfactuals are evaluated by ‘rewinding’ the universe to a time at which the antecedent had a reasonable probability of coming about and considering the probability for the consequent, given the antecedent. This method avoids surprising dynamics, allows the time of the branchpoint to be determined by the system’s dynamics (rather than by context) and uses scientific posits to specify the relevant probabilities. I then show how the branchpoint proposal can be justified by considering an evidential role for counterfactuals: counterfactuals help us reason about the probabilistic relations that hold in a hypothetical scenario at which the antecedent is maximally unsettled. A result is that we should distinguish the use of counterfactuals in contexts of control from their use for reasoning evidentially. Standard Lewisian accounts run into trouble precisely by expecting a single relation to play both roles.
    Found 2 weeks, 4 days ago on PhilSci Archive
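    Schematically (this is a gloss on the abstract, with notation introduced here, not the paper’s official statement): the counterfactual “if A had been the case, C would have been the case” is assessed by the conditional probability
    \[ P_{t_b}(C \mid A), \]
    where t_b is the branchpoint: a time, fixed by the system’s dynamics rather than by conversational context, at which A still had a reasonable probability of coming about and so is maximally unsettled.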
  29.
    We argue that it is neither necessary nor sufficient for a mathematical proof to have epistemic value that it be “correct”, in the sense of formalizable in a formal proof system. We then present a view on the relationship between mathematics and logic that clarifies the role of formal correctness in mathematics. Finally, we discuss the significance of these arguments for recent discussions about automated theorem provers and applications of AI to mathematics.
    Found 2 weeks, 5 days ago on PhilSci Archive
  30.
    We’re stopping briefly to consider one of the “chestnuts” in the exhibits of “chestnuts and howlers” in Excursion 3 (Tour II) of my book Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (SIST). …
    Found 2 weeks, 6 days ago on D. G. Mayo's blog