The trouble with combining AI and psychology

About this audio content

In their paper Combining Psychology with Artificial Intelligence: What could possibly go wrong?, cognitive scientists Iris van Rooij and Olivia Guest explore what happens when AI systems are treated as if they think like people. They examine how psychological research changes when these systems are used not just to mimic behaviour, but to explain it, and what that shift reveals about the assumptions shaping both fields.

Their argument matters because it is becoming easy to assume that if a system talks, writes, or predicts like a person, it must understand like one too. The paper unpacks why that assumption is flawed, and what it reveals about the kinds of reasoning science is beginning to accept.

Why the fusion of psychology and AI is epistemically dangerous

The fusion of psychology and AI, when approached without careful consideration, can disrupt our understanding of knowledge itself: how we formulate questions, construct theories, and determine what constitutes an explanation. The authors contend that this convergence can lead to errors more insidious than straightforward methodological mistakes. The core issue lies in how each field defines understanding and what kinds of outputs it counts as evidence. When those standards become blurred or diminished, distinguishing between a theory and a mere placeholder, or between a tool and the subject it is intended to study, becomes increasingly difficult.

Psychology's research habits lower the bar for explanation

Psychology, particularly in its mainstream experimental form, has been grappling with structural weaknesses. The replication crisis is the most visible symptom, but deeper issues shape research practice:

* Hyperempiricism refers to the tendency to prioritise data collection and statistically significant effects at the expense of developing robust theories. The mere presence of an effect is often considered informative, even without an accompanying explanation.
* Theory-light science describes a trend in which researchers focus on how individuals perform specific tasks without asking whether those tasks genuinely reflect broader cognitive capacities. The emphasis falls on measurable outcomes rather than explanatory depth.
* Statistical proceduralism reflects the field's inclination to address crises by implementing stricter protocols and greater statistical rigour rather than pursuing conceptual reform. Practices such as pre-registration and replication improve methodological rigour but do not tackle fundamental questions about what constitutes a meaningful theory.

These tendencies render the field vulnerable to what the authors term an "epistemic shortcut": a shift in how knowledge claims are justified. Rather than developing and testing theoretical assumptions, researchers may start to treat system outputs as inherently explanatory. Consequently, if an AI system produces behaviour resembling human responses, it may be mistakenly accepted as a stand-in for genuine cognition, even if the underlying mechanisms remain unexplored.

AI imports assumptions that favour performance over understanding

AI introduces assumptions of its own, often rooted in engineering, where success is measured by performance rather than explanation:

* Makeism suggests that building something is the key to understanding it. In practice, if an AI system replicates a behaviour, that behaviour is often assumed to be explained. However, replication does not establish that the same underlying process is at work.
* High-performing models are often treated as if they reveal the mechanisms behind the behaviours they mimic. Even if a system performs well, it may not capture the essence of the phenomenon.
* Performance metrics such as benchmark results and predictive accuracy are frequently equated with scientific insight. High-scoring models are often deemed valid even when the source of their success is unclear or irrelevant to cognitive theory.
* Hype cycles exacerbate these issues: commercial and reputational incentives encourage overstatement, making it easy to overlook constraints such as computational intractability or multiple realisability, where different systems produce similar outputs through different mechanisms.

Together, these factors foster a reasoning pattern in which systems with superficially human-like behaviour are assumed to be cognitively equivalent to humans, often without examining the assumptions behind that equivalence.

What goes wrong when these patterns reinforce each other

When psychology and AI are brought together without challenging these habits, their weaknesses can amplify each other, producing a number of epistemic errors:

* Category errors, where AI systems are treated as if they were minds or cognitive agents.
* Success-to-truth inferences, where good performance is taken as evidence that a system is cognitively plausible.
* Theory-laundering, where the outputs of machine learning systems are framed as if they ...