1.
    The idea that diverse groups of ordinary citizens will “outperform” expert panels has become something of a totemic conviction in democratic theory. The “diversity trumps ability” (DTA) theorem, first formulated by the economists Lu Hong and Scott E. Page (2004), asserts that under certain conditions, diverse assemblies will find better solutions to complex problems than homogeneous groups of the best experts. This result has been taken up with much enthusiasm by political theorists, some of whom read it as proving the epistemic supremacy of democratic decision-making over its competitors (Landemore 2013). In debates with defenders of expertocratic and epistocratic, let alone autocratic, modes of decision-making, …
    Found 8 hours, 54 minutes ago on Kai Spiekermann's site
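    The DTA claim is usually stated abstractly, but the underlying setup is easy to simulate. The sketch below is a rough, hedged reconstruction of the kind of agent-based model in Hong and Page (2004), not Spiekermann's own analysis: problem solvers are hill-climbing heuristics (ordered tuples of step sizes) searching a ring of random values, and a team of the individually best performers is compared with a randomly drawn, more diverse team working in relay. All parameter values, and the exact search and relay rules, are illustrative assumptions chosen to keep the example small.

```python
import itertools
import random

N_POINTS = 200          # size of the ring-shaped solution space (illustrative)
MAX_STEP = 12           # heuristics draw step sizes from 1..MAX_STEP
STEPS_PER_AGENT = 3     # each heuristic is an ordered triple of distinct step sizes
TEAM_SIZE = 9

random.seed(0)
values = [random.random() for _ in range(N_POINTS)]  # value of each candidate solution

def climb(heuristic, start):
    """Hill-climb from `start`, trying steps in order; stop when none improves."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            nxt = (pos + step) % N_POINTS
            if values[nxt] > values[pos]:
                pos = nxt
                improved = True
    return pos

def ability(heuristic):
    """Average value reached when starting from every point on the ring."""
    return sum(values[climb(heuristic, s)] for s in range(N_POINTS)) / N_POINTS

def team_performance(team):
    """Agents work in relay: each applies its heuristic to the current best point."""
    total = 0.0
    for start in range(N_POINTS):
        pos = start
        improved = True
        while improved:
            improved = False
            for heuristic in team:
                new_pos = climb(heuristic, pos)
                if values[new_pos] > values[pos]:
                    pos = new_pos
                    improved = True
        total += values[pos]
    return total / N_POINTS

all_heuristics = list(itertools.permutations(range(1, MAX_STEP + 1), STEPS_PER_AGENT))
ranked = sorted(all_heuristics, key=ability, reverse=True)

best_team = ranked[:TEAM_SIZE]                           # the individually best solvers
random_team = random.sample(all_heuristics, TEAM_SIZE)   # a randomly drawn, diverse team

print("best-agent team :", round(team_performance(best_team), 3))
print("random team     :", round(team_performance(random_team), 3))
```

    Under Hong and Page's stated conditions one typically sees the random team matching or beating the team of top individual performers; since the parameters here are scaled down, the output should be read as an illustration of the setup rather than a test of the theorem.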
  2.
    What does it mean to theorize about bounded rationality? Today’s post situates theories of bounded rationality against a competing Standard Picture that came to prominence during the middle of the twentieth century. …
    Found 12 hours, 6 minutes ago on The Brains Blog
  3.
    Human Technology Interaction, Eindhoven University of Technology. [Some earlier posts by D. Lakens on this topic are listed at the end of part 2, forthcoming this week.] How were we supposed to move beyond p < .05, and why didn’t we? …
    Found 19 hours, 15 minutes ago on D. G. Mayo's blog
  4.
    Confirmation bias has been widely studied for its role in failures of reasoning. Individuals exhibiting confirmation bias fail to engage with information that contradicts their current beliefs, and, as a result, can fail to abandon inaccurate beliefs. But although most investigations of confirmation bias focus on individual learning, human knowledge is typically developed within a social structure. We use network models to show that moderate confirmation bias often improves group learning. However, a downside is that a stronger form of confirmation bias can hurt the knowledge-producing capacity of the community.
    Found 2 days, 13 hours ago on Cailin O’Connor's site
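    The kind of result described above can be illustrated with a toy simulation. The sketch below is not the authors' model; it is a minimal two-armed-bandit network in the general spirit of such models, with my own crude stand-in for confirmation bias: agents on a cycle test an uncertain action B against a safe action with known payoff 0.5, share their batches of evidence with neighbours, and sometimes ignore a neighbour's batch when it disconfirms their current belief. N_AGENTS, BIAS_STRENGTH, the payoff values and the "ignore disconfirming batches" rule are all illustrative assumptions.

```python
import random

random.seed(1)

N_AGENTS      = 10     # agents arranged on a cycle (illustrative)
N_ROUNDS      = 300
TRIALS        = 10     # experiments per agent per round
P_GOOD        = 0.51   # true success rate of the uncertain action B
P_BAD         = 0.49   # rival hypothesis about B; the safe action A pays 0.5 for sure
BIAS_STRENGTH = 0.5    # probability of ignoring a disconfirming batch of evidence

def likelihood(successes, trials, p):
    """Binomial likelihood of the data under success rate p (constant factor dropped)."""
    return (p ** successes) * ((1 - p) ** (trials - successes))

def update(credence, successes, trials):
    """Bayesian update of the credence that B's true rate is P_GOOD rather than P_BAD."""
    num = credence * likelihood(successes, trials, P_GOOD)
    den = num + (1 - credence) * likelihood(successes, trials, P_BAD)
    return num / den

def disconfirms(credence, successes, trials):
    """True if this batch, taken alone, favours the hypothesis the agent disbelieves."""
    favours_good = likelihood(successes, trials, P_GOOD) > likelihood(successes, trials, P_BAD)
    return favours_good != (credence > 0.5)

credences = [random.random() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    # Each agent experiments on B only if it currently thinks B beats the safe action.
    results = []
    for c in credences:
        if c > 0.5:
            successes = sum(random.random() < P_GOOD for _ in range(TRIALS))
            results.append((successes, TRIALS))
        else:
            results.append(None)   # plays the safe action, so produces no B-evidence

    new_credences = []
    for i, c in enumerate(credences):
        neighbours = (i, (i - 1) % N_AGENTS, (i + 1) % N_AGENTS)   # self plus cycle neighbours
        for j in neighbours:
            if results[j] is None:
                continue
            # Confirmation-bias stand-in: sometimes ignore others' evidence that
            # disconfirms the agent's current belief (own evidence is always used).
            if j != i and disconfirms(c, *results[j]) and random.random() < BIAS_STRENGTH:
                continue
            c = update(c, *results[j])
        new_credences.append(c)
    credences = new_credences

print("final credences that B is the better action:")
print([round(c, 3) for c in credences])
```

    Varying BIAS_STRENGTH from 0 upwards over repeated runs is the kind of experiment that would probe the claim that moderate bias can help while stronger bias hurts; nothing about the specific numbers here is meant to reproduce the paper's findings.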
  5.
    This chapter reviews empirical research on the rules governing assertion and retraction, with a focus on the normative role of truth. It examines whether truth is required for an assertion to be considered permissible, and whether there is an expectation that speakers retract statements that turn out to be false. Contrary to factive norms (such as the influential “knowledge norm”), empirical data suggests that there is no expectation that speakers only make true assertions. Additionally, contrary to truth-relativist accounts, there is no requirement for speakers to retract statements that are false at the context of assessment. We conclude by suggesting that truth still plays a crucial role in the evaluation of assertions: as a standard for evaluating their success, rather than permissibility.
    Found 3 days, 10 hours ago on PhilSci Archive
  6.
    In the 1960s, the demonstration of interference effects using two laser beams raised the question: can two photons interfere? The plausibility of an affirmative answer contested Dirac’s dictum, “Interference between two different photons never occurs”. Disagreements about this conflict led to a controversy. This paper will chart the controversy’s contours and show that it evolved over two phases. Subsequently, I investigate the reasons for its perpetuation. The controversy was initiated and fuelled by several misinterpretations of the dictum. I also argue that Dirac’s dictum is not applicable to two-photon interference, as the two belong to different contexts of interference. Recognising this resolves the controversy.
    Found 4 days, 18 hours ago on PhilSci Archive
  7.
    If morality and self-interest don’t always coincide—if sometimes doing what’s right isn’t also best for you—morality can sometimes require you to do what will be worse for you or to forgo an act that would benefit you. But some philosophers think a reasonable morality can’t be too demanding in this sense and have proposed moral views that are less so.
    Found 4 days, 19 hours ago on Stanford Encyclopedia of Philosophy
  8.
    Causal decision theorists are vulnerable to a money pump if they update by conditioning when they learn what they have chosen. Nevertheless, causal decision theorists are immune to money pumps if they instead update by imaging on their choices and by conditioning on other things (and, in addition, evaluate plans rather than choices). I also show that David Lewis’s Dutch-book argument for conditioning does not work when you update on your choices. Even so, a collective of causal decision theorists is still exploitable even if its members start off with the same preferences and the same credences and will all see the same evidence. Evidential decision theorists who consistently update by conditioning are not exploitable in this way.
    Found 5 days, 7 hours ago on Johan E. Gustafsson's site
  9.
    Despite persistent misunderstandings to the contrary, standpoint theorists are not committed to an automatic privilege thesis (Wylie 2003, 27). According to an automatic privilege thesis, those who occupy marginalized social positions automatically know more, or know better, by virtue of their social location. The issues with this thesis are obvious: it is implausible; it offers no explanation of the connection between marginalized social location and epistemic advantage; and it cannot explain how it is that some marginalized individuals seem to (genuinely) buy into oppressive ideologies.
    Found 5 days, 9 hours ago on Philosopher's Imprint
  10.
    The field of Artificial Intelligence (AI) safety evaluations aims to test AI behavior for problematic capabilities like deception. However, some scientists have cautioned against using behavior to infer general cognitive abilities, because of the human tendency to overattribute cognition to everything. To avoid these errors, they recommend adopting a heuristic according to which behavior provides no evidence for a cognitive capability unless some theoretical feature is present to justify that inference.
    Found 6 days, 2 hours ago on PhilSci Archive
  11.
    We define a notion of inaccessibility of a decision between two options represented by utility functions, where the decision is based on the order of the expected values of the two utility functions. Inaccessibility expresses that the decision cannot be obtained if the expected values of the utility functions are calculated using the conditional probability defined by a prior together with partial evidence about the probability that determines the decision. Examples of inaccessible decisions are given in finite probability spaces. Open questions and conjectures about the inaccessibility of decisions are formulated. The results are interpreted as showing the crucial role of priors in the Bayesian taming of epistemic uncertainties about the probabilities that determine decisions based on utility maximization.
    Found 6 days, 2 hours ago on PhilSci Archive
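    The abstract's notion can be made concrete with a small worked example of my own construction (not one of the paper's, and only loosely in its spirit): on a three-state space, the true probability ranks option u above option v by expected value, while an agent who learns only the probability of a partial event and combines it with a prior, here via a Jeffrey-style update, ranks them the other way. The particular numbers, the event A = {0, 1}, and the use of Jeffrey conditioning are all illustrative assumptions.

```python
# Three-state toy example: states 0, 1, 2.
u = [1.0, 0.0, 0.0]   # option u pays off only in state 0
v = [0.0, 0.0, 1.0]   # option v pays off only in state 2

mu = [0.4, 0.3, 0.3]      # the true probability that settles the decision
prior = [0.1, 0.5, 0.4]   # the agent's prior

def expected(utility, prob):
    return sum(ui * pi for ui, pi in zip(utility, prob))

# The agent only learns the probability of the partial event A = {0, 1} under mu.
p_A = mu[0] + mu[1]             # = 0.7
prior_A = prior[0] + prior[1]   # = 0.6

# Jeffrey-style update of the prior on the evidence P(A) = 0.7.
posterior = [
    prior[0] / prior_A * p_A,              # state 0
    prior[1] / prior_A * p_A,              # state 1
    prior[2] / (1 - prior_A) * (1 - p_A),  # state 2
]

print("true ordering   :", expected(u, mu),
      ">" if expected(u, mu) > expected(v, mu) else "<=", expected(v, mu))
print("agent's ordering:", round(expected(u, posterior), 3),
      ">" if expected(u, posterior) > expected(v, posterior) else "<=",
      round(expected(v, posterior), 3))
# With these numbers the true expected values favour u (0.4 vs 0.3), while the
# prior-plus-partial-evidence values favour v (~0.117 vs 0.3): in this informal
# sense the decision is inaccessible from the agent's information.
```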
  12.
    Postgraduate research training in the United Kingdom often focuses narrowly on domain-specific methods, neglecting wider philosophical topics such as epistemology and scientific method. Consequently, we designed a workshop on (inductive, deductive, and abductive) inference for postgraduate researchers. We ran the workshop three times with attendees (N = 29) from across four universities, testing its potential benefits in a mixed-methods, repeated-measures design. Our core aims were to investigate what attendees learned from the workshop and whether they felt it had affected their research practices six months later. Overall, learning inferential logic benefitted postgraduate researchers in various ways and to varying degrees. Six months on, roughly half of attendees reported being more critical of key aspects of research such as inferences and study design. Additionally, some attendees reported more subtle effects, such as the workshop prompting new lines of thought and inquiry. Given that self-criticism and scepticism are fundamental intellectual virtues, these results speak to the importance of embedding epistemological training in doctoral programmes across the UK.
    Found 6 days, 2 hours ago on PhilSci Archive
  13.
    Department of Statistical Sciences “Paolo Fortunati”, University of Bologna. [An earlier post by C. Hennig on this topic, Jan 9, 2022: The ASA controversy on P-values as an illustration of the difficulty of statistics.] Statistical tests in five random research papers of 2024, and related thoughts on the “don’t say significant” initiative. This text follows an invitation to write on “abandon statistical significance 5 years on”, so I decided to do a tiny bit of empirical research. …
    Found 6 days, 18 hours ago on D. G. Mayo's blog
  14.
    Artificial General Intelligence (AGI) is said to pose many risks, be they catastrophic, existential, or otherwise. This paper discusses whether the notion of risk can apply to AGI, both descriptively and in the current regulatory framework. The paper argues that current definitions of risk are ill-suited to capture the supposed existential risks of AGI, and that the risk-based framework of the EU AI Act is inadequate for dealing with truly general, agential systems.
    Found 1 week ago on Federico L. G. Faroldi's site
  15.
    On the occasion of the 7th International Conference on Economic Philosophy that we organized last month in Reims, we had two book sessions on recently published books dealing with the main topic of the conference, “market(s) and democracy.” One of the sessions was about Petr Špecián’s (Charles University) Behavioral Political Economy and Democratic Theory (Routledge, 2022) and the other discussed Lisa Herzog’s (University of Groningen) Citizen Knowledge. …
    Found 1 week ago on The Archimedean Point
  16.
    A common assumption in discussions of abilities is that phobias restrict an agent's abilities. Arachnophobics, for example, can't pick up spiders. I wonder if this is true, if we're talking about the pure 'can' of ability. …
    Found 1 week, 1 day ago on wo's weblog
  17.
    Discourse involving predicates of personal taste (PPT) such as ‘delicious,’ ‘disgusting,’ ‘fun,’ and ‘cool’ has been a focal point in a large, interdisciplinary body of research spanning the past 20 years. This research has shown that PPT are connected to numerous topics, including disagreement, meaning, context-sensitivity, subjectivity and objectivity, truth, aesthetic and gustatory taste, evaluation, speech acts, and so on. Researchers involved in the PPT debates have developed many subtle and inventive analyses of PPT, so that anyone interested in their behaviour must traverse a complex theoretical landscape. Despite the massive amount of work on the topic, there is a crucial methodological question about PPT that remains underexplored: what sorts of evidence should be called upon to evaluate an analysis of PPT? So far, most researchers have operated from the armchair, using their own intuitions about various linguistic phenomena to evaluate analyses of PPT. In recent years, however, certain philosophers and linguists have found this method wanting, noting that hypotheses about PPT are empirical, and thus need to be evaluated empirically.
    Found 1 week, 1 day ago on Jeremy Wyatt's site
  18.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs): generative artificial intelligence (AI) systems trained on massive datasets of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. For most users, current LLMs are black boxes, i.e., for the most part, they lack data transparency and algorithmic transparency. They can, however, be phenomenologically and informationally transparent, in which case there is an interactional flow. Anthropomorphising and interactional flow can, in some users, create an attitude of (unwarranted) trust towards the output LLMs generate. We conclude this paper by drawing on the epistemology of trust and testimony to examine the epistemic implications of these dimensions. Whilst LLMs generally generate accurate responses, we observe two epistemic pitfalls. Ideally, users should be able to match the level of trust they place in LLMs to the degree to which LLMs are trustworthy. However, both their data and algorithmic opacity and their phenomenological and informational transparency can make it difficult for users to calibrate their trust correctly. The effects of these limitations are twofold: users may adopt unwarranted attitudes of trust towards the outputs of LLMs (which is particularly problematic when LLMs hallucinate), and the trustworthiness of LLMs may be undermined.
    Found 1 week, 2 days ago on Matteo Colombo's site
  19.
    What should morally conscientious agents do if they must choose among options that are somewhat right and somewhat wrong? Should you select an option that is right to the highest degree, or would it perhaps be more rational to choose randomly among all somewhat right options? And how should lawmakers and courts address behavior that is neither entirely right nor entirely wrong? In this first book-length discussion of the “gray area” in ethics, Martin Peterson challenges the assumption that rightness and wrongness are binary properties. He argues that some acts are neither entirely right nor entirely wrong, but rather a bit of both. Including discussions of white lies and the permissibility of abortion, Peterson’s book presents a gradualist theory of right and wrong designed to answer pressing practical questions about the gray area in ethics.
    Found 1 week, 2 days ago on Martin Peterson's site
  20.
    Suppose that an informant (test, expert, device, perceptual system, etc.) is unlikely to err when pronouncing on a particular subject matter. When this is so, it might be tempting to defer to that informant when forming beliefs about that subject matter. How is such an inferential process expected to fare in terms of truth (leading to true beliefs) and evidential fit (leading to beliefs that fit one’s total evidence)? Using a medical diagnostic test as an example, we set out a formal framework to investigate this question. We establish seven results and make one conjecture. The first four results show that when the test’s error probabilities are low, the process of deferring to the test can score well in terms of (i) both truth and evidential fit, (ii) truth but not evidential fit, (iii) evidential fit but not truth, or (iv) neither truth nor evidential fit. Anything is possible. The remaining results and conjecture generalize these results in certain ways. These results are interesting in themselves—especially given that the diagnostic test is not sensitive to the target disease’s base rate—but also have broader implications for the more general process of deferring to an informant. Additionally, our framework and diagnostic example can be used to create test cases for various reliabilist theories of inferential justification. We show, for example, that they can be used to motivate evidentialist process reliabilism over process reliabilism.
    Found 1 week, 2 days ago on Michael Roche's site
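    A standard Bayes calculation makes vivid why low error probabilities alone do not settle how deference fares. The numbers below are illustrative assumptions, not taken from the paper: with a rare condition and 5% error rates on both sides, deferring to a positive result means believing something that the total evidence (test result plus base rate) supports only weakly, while deference on negative results does well on both counts.

```python
# Illustrative numbers, not from the paper: a test with 5% error rates on both sides.
base_rate   = 0.01   # prior probability of the disease
sensitivity = 0.95   # P(positive | disease)
specificity = 0.95   # P(negative | no disease)

p_pos = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
p_disease_given_pos = sensitivity * base_rate / p_pos
p_no_disease_given_neg = specificity * (1 - base_rate) / (1 - p_pos)

print(f"P(positive)              = {p_pos:.3f}")
print(f"P(disease | positive)    = {p_disease_given_pos:.3f}")      # about 0.16
print(f"P(no disease | negative) = {p_no_disease_given_neg:.3f}")   # about 0.999

# Deferring to the test means believing "disease" on a positive result. That belief
# is true only about 16% of the time and fits the total evidence poorly, even though
# the test's error probabilities are low; on negative results, by contrast, deference
# does well on both truth and evidential fit.
```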
  21.
    Ordinal, interval, and ratio scales are discussed, and arguments are laid out for the thesis that “better than” comparisons reside on interval or ratio scales. It is argued that linguistic arguments are not conclusive, since alternative rank-based definitions can be given, and that in general “better than” comparisons do not have a common scale type. Some comparison dimensions reside on ratio scales, whereas others show no evidence of lying on a scale stronger than an ordinal scale.
    Found 1 week, 3 days ago on Erich Rast's site
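    A two-line check illustrates why ordinal information underdetermines interval claims; the numbers are mine and purely illustrative. Two value assignments that agree on every "better than" ranking can disagree about whether the gap between the first two items exceeds the gap between the last two, so only claims invariant under order-preserving transformations are warranted by an ordinal scale alone.

```python
# Two value assignments over options A, B, C that induce the same ordering A > B > C.
scale_1 = {"A": 3.0,  "B": 2.0, "C": 1.0}
scale_2 = {"A": 10.0, "B": 2.0, "C": 1.0}   # an order-preserving transformation of scale_1

same_order = (scale_1["A"] > scale_1["B"] > scale_1["C"]) and \
             (scale_2["A"] > scale_2["B"] > scale_2["C"])

# Interval-scale claim: "A is better than B by more than B is better than C."
gap_claim_1 = (scale_1["A"] - scale_1["B"]) > (scale_1["B"] - scale_1["C"])  # 1.0 > 1.0 -> False
gap_claim_2 = (scale_2["A"] - scale_2["B"]) > (scale_2["B"] - scale_2["C"])  # 8.0 > 1.0 -> True

# True False True: the ranking alone cannot settle the gap claim.
print(same_order, gap_claim_1, gap_claim_2)
```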
  22.
    I should really be done with Integrated Information Theory (IIT), in Aaronson’s simplified formulation, but I noticed a rather interesting difficulty. In my previous post on the subject, I noted that a double-grid system, where two grids are stacked on top of one another, with the bottom grid consisting of inputs and the upper grid of outputs, and each upper value being the logical OR of the (up to) five neighboring input values, will be conscious according to IIT if all the values are zero and the grid is large enough. …
    Found 1 week, 3 days ago on Alexander Pruss's Blog
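    For readers who want the construction in front of them, here is a minimal sketch of the double-grid layout the post describes: a bottom grid of binary inputs and a top grid of outputs, each output the OR of up to five neighbouring inputs. It only builds the network and evaluates one update step; it makes no attempt to compute IIT's Φ, and the exact neighbourhood convention (the cell directly below plus that cell's four orthogonal neighbours) is my assumption.

```python
# Minimal sketch of the double-grid system: a bottom grid of binary inputs and a
# top grid of outputs, each output being the OR of the (up to) five von Neumann
# neighbours (the cell directly below plus that cell's four orthogonal neighbours).
SIZE = 8  # illustrative grid size

def step(inputs):
    """One update of the output grid from the input grid."""
    n = len(inputs)
    outputs = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            val = inputs[i][j]
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    val = val or inputs[ni][nj]
            outputs[i][j] = val
    return outputs

all_zero = [[0] * SIZE for _ in range(SIZE)]
print(step(all_zero) == all_zero)   # True: the all-zero state is a fixed point of the OR dynamics
```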
  23.
    Scientists have the epistemic responsibility of producing knowledge. They also have the social responsibility of aligning their research with the needs and values of various societal stakeholders. Individual scientists may be left with no guidance on how to prioritise and carry these different responsibilities. As I will argue, however, the responsibilities of science can be harmonised at the collective level. Drawing from debates in moral philosophy, I will propose a theory of the collective responsibilities of science that accounts for the internal diversity of research groups and for their different responsibilities.
    Found 1 week, 4 days ago on PhilSci Archive
  24.
    I hope this is my last post for a while on Integrated Information Theory (IIT), in Aaronson’s simplified formulation. One of the fun and well-known facts is that if you have an impractically large square two-dimensional grid of interconnected logic gates (presumably with some constant time delay in each gate between inputs and outputs to prevent race conditions) at a fixed point (i.e., nothing is changing), the result can still have a degree of integrated information proportional to the square root of the number of gates. …
    Found 1 week, 6 days ago on Alexander Pruss's Blog
  25.
    There’s a straightforward sense in which we ought to do whatever we have (all things considered) most reason to do. But permissibility is a laxer notion than this. Conceptually, it may be permissible to do less than what we have most reason to do. …
    Found 1 week, 6 days ago on Good Thoughts
  26.
    As I’m writing the final words of my manuscript and – hopefully – would-be book tentatively titled “Social Choice and Public Reason,” I’ve been rereading some classics of social choice theory to find some material relevant for the general introduction. …
    Found 1 week, 6 days ago on The Archimedean Point
  27.
    Recent research indicates gender differences in the impact of stress on decision behavior, but little is known about the brain mechanisms involved in these gender-specific stress effects. The current study used functional magnetic resonance imaging (fMRI) to determine whether induced stress resulted in gender-specific patterns of brain activation during a decision task involving monetary reward. Specifically, we manipulated physiological stress levels using a cold pressor task, prior to a risky decision making task. Healthy men (n = 24, 12 stressed) and women (n = 23, 11 stressed) completed the decision task after either cold pressor stress or a control task during the period of cortisol response to the cold pressor. Gender differences in behavior were present in stressed participants but not controls, such that stress led to greater reward collection and faster decision speed in males but less reward collection and slower decision speed in females. A gender-by-stress interaction was observed for the dorsal striatum and anterior insula. With cold stress, activation in these regions was increased in males but decreased in females. The findings of this study indicate that the impact of stress on reward-related decision processing differs depending on gender.
    Found 1 week, 6 days ago on Mara Mather's site
  28.
    I’m still thinking about Integrated Information Theory (IIT), in Aaronson’s simplified formulation. Aaronson’s famous criticisms show pretty convincingly that IIT fails to correctly characterize consciousness: simple but large systems of unchanging logic gates end up having human-level consciousness on IIT. …
    Found 2 weeks, 2 days ago on Alexander Pruss's Blog
  29.
    When Kuhn first published his Structure of Scientific Revolutions he was accused of promoting an “irrationalist” account of science. Although it has since been argued that this charge is unfair in one aspect or another, the early criticism still exerts an influence on our understanding of Kuhn. In particular, normal science is often characterized as dogmatic and uncritical, even by commentators sympathetic to Kuhn. I argue not only that there is no textual evidence for this view but also that normal science is much better understood as being based on epistemically justified commitment to a paradigm and as pragmatic in its handling of anomalies. I also argue that normal science is an example of what I call Kuhn’s program of revisionary rational reconstruction.
    Found 2 weeks, 4 days ago on PhilSci Archive
  30.
    In this paper I issue a challenge to what I call the “Independence Thesis of Theory Assessment” (ITTA). According to ITTA, the evidence for (or against) a theory must be assessed independently from the theory explaining the evidence. I argue that ITTA is undermined by cases of evidential uncertainty, in which scientists have been guided by the explanatory power of their theories in the assessment of the evidence. Instead, I argue, these cases speak in favor of a model of theory assessment in which explanatory power may indeed contribute to the stabilization of the evidential basis.
    Found 2 weeks, 4 days ago on PhilSci Archive