Episodes

  • Ep. 15: How AI tools erode critical thinking through cognitive offloading
    Jul 18 2025

    Most people don’t think about their thinking tools. When AI systems offer explanations, solve problems, or generate decisions, the convenience is obvious, but there’s a cognitive cost: the more we offload reasoning to machines, the less we engage in it ourselves.

    As always, there are two sides to everything - this is an alternative view to last week’s episode on the positive aspects of extending our minds with generative AI.

    Key points

    * Frequent use of AI tools is associated with lower critical thinking skills through a mechanism called cognitive offloading - delegating reasoning tasks to external systems.

    * Trust in AI increases the likelihood of offloading and reinforces the habit, while education mitigates the effect - but only when it fosters active cognitive engagement.

    * Over time, offloading reduces cognitive resilience and weakens independent judgement.

    Source: Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. (open access)



    12 min
  • Ep. 14: Extending minds with generative AI
    Jul 11 2025

    Much of the public conversation around AI centres on its outputs: what it can generate, how well it performs, what tasks it might take over. Those questions often obscure a more foundational shift: AI systems are becoming embedded in how people think - not just as occasional tools, but as part of the cognitive process itself.

    A recent paper by Andy Clark (Nature Communications, May 2025) situates this shift within a broader cognitive history. Clark is best known for the “extended mind” hypothesis, which argues that human thinking routinely spans across brain, body, and environment. In this article, he applies that lens to generative AI, treating it not as a foreign agent but as a new layer in an already distributed system.

    Key points:

    * Human cognition has always relied on external tools; generative AI continues this pattern of extension.

    * The impact of AI depends on how it is integrated into the thinking process - not just on what it can produce.

    * Clark introduces the idea of “extended cognitive hygiene” - a new skillset for navigating AI-supported reasoning.

    Source: Clark, A. (2025). Extending minds with Generative AI. Nature Communications, 16(1), 1-4. (Open Access)



    13 min
  • Ep. 13: How GenAI roles shape perceptions of value in human-AI collaboration
    Jun 13 2025

    When we talk about AI collaboration, the question is usually whether AI was used at all. That binary misses something crucial about how humans actually experience working with generative systems: what matters is not just whether AI was involved, but when and how it participated in the creative process.

    A recent study suggests that whether AI generates the first draft or provides feedback on a human draft fundamentally changes how people perceive their creative contribution. When AI starts the work, humans feel like editors rather than creators, even when doing substantial revision. But when humans start and AI refines, the final output feels both higher quality and more authentically their own. The twist? People expect others to devalue AI-enhanced work, creating a tension between internal pride and external credibility. This isn't just about tools - it's about how role assignment shapes creative ownership and the social meaning of AI collaboration.

    Source: Schecter, A., & Richardson, B. (2025, April). How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-15). (open access)



    12 min
  • Ep. 12: The Cyborg Behavioral Scientist
    May 31 2025
    Tomaino, Cooke and Hoover used ChatGPT-4, Bing Copilot, and Google Gemini to execute an entire research project from initial idea to final manuscript. They documented what these systems accomplished and where they failed across six stages of academic work. This paper is a reflective, empirical probe into the limits of AI as a research collaborator. It offers a clear-eyed diagnosis of what’s currently possible, what’s still missing, and why the human researcher remains essential not just for quality, but for meaning.

    TL;DR? AI can mimic scientific work convincingly while fundamentally misunderstanding what makes it meaningful.

    Source: Tomaino, G., Cooke, A. D. J., & Hoover, J. (2025). AI and the advent of the cyborg behavioral scientist. Journal of Consumer Psychology, 35, 297–315. Available at SSRN.

    Detailed notes

    1. Purpose and setup

    The paper sets out to examine whether Large Language Models (LLMs) can meaningfully perform the tasks involved in behavioural science research. Rather than speculate, the authors designed a practical test: conduct an entire behavioural research project using AI tools (ChatGPT-4, Bing Copilot, and Google Gemini) at every stage where possible. Their goal was to document what these systems can do, where they fall short, and what that tells us about the evolving relationship between AI and human thought in knowledge production. They call this process the “cyborg behavioural scientist” model, where AI and human roles are blended, but with minimal human intervention wherever feasible.

    They assessed AI performance across six canonical research stages:

    * Ideation

    * Literature Review

    * Research Design

    * Data Analysis

    * Extensions (e.g., follow-up studies)

    * Manuscript Writing

    2. Ideation

    The ideation phase tested whether LLMs could generate viable research questions. The authors used prompting sequences to elicit possible topics in consumer behaviour and asked the AIs to propose empirical research directions.

    Findings:

    * AIs provided broad, somewhat vague suggestions (e.g. “Digital consumption and mental health”), which lacked the specificity required for testable hypotheses.

    * When asked to generate more focused ideas within a chosen theme (“ethical consumption”), the outputs improved. The researchers selected a concept called “ethical fatigue” - the idea that overexposure to ethical branding messages could dull their persuasive effect.

    * Getting from general territory to a research-ready idea required multiple layers of human-guided refinement. The AI could not identify research gaps or develop theoretically sound rationales.

    Conclusion: LLMs can function as brainstorming partners, surfacing domains of interest and initial directions, but they lack the epistemic grip to generate research questions that are original, tractable, and well-positioned within the literature.

    3. Literature review

    Once a topic was selected, the authors asked the AIs to identify relevant literature, assess novelty, and suggest theoretical foundations.

    Findings:

    * The AIs failed to access or cite relevant academic literature. Most references were hallucinated, incorrect, or drawn from superficial sources.

    * The models often praised the research idea without offering critical evaluation or theoretical positioning.

    * The inability to access closed-access journals was a major barrier. Even when articles were available, the AIs rarely retrieved or interpreted them meaningfully.

    Conclusion: AI cannot currently perform reliable literature reviews - its lack of access, weak interpretive depth, and tendency to hallucinate references make this stage unsuitable for unsupervised delegation.

    4. Research design

    The AIs were tasked with designing an experiment to test the hypothesis that ethical branding becomes less effective when consumers are overexposed to it.

    Findings:

    * The AI-generated designs were broadly plausible but flawed. Some included basic confounds (e.g. varying both message frequency and content type simultaneously).

    * With human corrections (e.g. balancing exposure conditions, clarifying manipulations), the designs became usable.

    * Stimuli generation (e.g. ethical vs. non-ethical brand statements) was one of the strongest areas for AI—responses were realistic, targeted, and ready for use.

    * The AIs failed to produce usable survey files in Qualtrics’ native format (QSF). ChatGPT attempted it, but the output didn’t meet schema requirements.

    Conclusion: AI shows potential as a design assistant, especially for stimulus creation and generating structural ideas, but human researchers must ensure validity, feasibility, and proper implementation; technical execution remains limited.

    5. Data analysis

    Here, the authors uploaded actual survey data and asked the AIs to perform statistical analysis.

    Findings:

    * Gemini could not handle data uploads, so only ChatGPT and Bing were tested.

    * Both AIs recognised the appropriate statistical test (ANOVA) and produced plausible-looking outputs.

    * However, the reported statistics...
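
    The data analysis stage underlines a practical point: AI-reported statistics should be verified rather than trusted. As a purely illustrative sketch (not from the paper; the file and column names below are hypothetical), a researcher could re-run the one-way ANOVA directly on the survey data and compare it with whatever the assistant reported:

        # Illustrative only: recompute the one-way ANOVA instead of trusting AI-reported output.
        # Assumes a hypothetical CSV with a "condition" column (exposure group)
        # and a "purchase_intent" column (the dependent measure).
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("ethical_fatigue_survey.csv")  # hypothetical file name
        groups = [g["purchase_intent"].to_numpy() for _, g in df.groupby("condition")]

        f_stat, p_value = stats.f_oneway(*groups)  # one-way ANOVA across exposure conditions
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    Even a small check like this would catch the kind of failure the episode describes: output that looks like statistics but may not match the data.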
    25 min
  • Ep. 11: Rethinking responsible AI through human rights
    May 29 2025
    This episode looks at a paper that makes a quiet but important shift in how responsible AI is framed. The authors argue that instead of building ethical principles from scratch, we should start from the human rights frameworks that already exist. These frameworks are familiar in law, politics, and civil society but less so in AI design.

    The paper suggests that using human rights as a reference point helps clarify what’s at stake. It draws attention to whose interests are being protected, which harms are made visible, and where accountability sits when systems cause harm. Rather than focusing on technical metrics, the rights framing asks how AI systems interact with people’s ability to speak, act, or be heard—and how those interactions are shaped by context, culture, and power.

    In this episode, we explore how that shift changes what is noticed, who is included, and how responsibility is structured. We also reflect on where behavioural science intersects with these ideas—especially in shaping attention, perceived legitimacy, and the ways people interpret fairness in system-driven environments.

    The authors bring a wide range of experience from both inside and outside the tech industry. Vinodkumar Prabhakaran (a research scientist at Google), Margaret Mitchell (then at Hugging Face), Timnit Gebru (founder of DAIR, the Distributed AI Research Institute), and Iason Gabriel (a research scientist at DeepMind) have each worked at the intersection of AI ethics, governance, and civil rights.

    Source: Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A human rights-based approach to responsible AI. arXiv preprint arXiv:2210.02667.

    Companion reflection: A framework that reframes the ethical question

    This paper proposes something deceptively simple: that AI ethics would benefit from rooting its values not in abstract principles or technical ideals, but in the already-contested terrain of human rights. Written by researchers across DAIR, DeepMind, Hugging Face, and Google, it reframes responsible AI not as a matter of value specification, but of rights protection. The argument is that systems should be assessed not by what they claim to optimise, but by what they risk displacing, especially in relation to the people most likely to be harmed.

    At its core, this isn’t just a legal or philosophical shift. It’s a cognitive one. The paper asks us to move from thinking about system properties to recognising patterns of harm and redirecting ethical attention from the internal logic of the model to the external social conditions it reshapes. That move is not only moral, but psychological. It changes what is perceived, who is legible, and which consequences come into view.

    From system metrics to harm perception

    The authors are asking for a shift in how we look at harm. Much of AI ethics has focused on terms like fairness or robustness - ideas that tend to be defined inside the system, based on what’s technically measurable. When ethical thinking starts there, it often stays close to the model, and what gets missed are the wider consequences for the people on the receiving end.

    The rights framing starts from a different point. It begins with what people are entitled to, like the ability to speak, to act, or to be included in decisions that affect them. Framing things this way brings attention back to the external context: who is affected, under what conditions, and with what constraints on their ability to respond.

    There’s also a cultural dimension: rights frameworks have been shaped through decades of debate, across legal systems and political movements, which makes them more than just a checklist of protections. They carry assumptions about whose claims are recognised, and on what terms. When AI systems developed in one cultural setting are used globally, those underlying assumptions become crucially important. While the rights lens doesn’t resolve the tension, it does help to make it visible.

    Key ideas

    What we pay attention to depends on how harm is framed

    When we evaluate systems by looking at whether they’re fair or transparent, we’re often relying on internal criteria - does the system follow its own rules, or meet a technical definition? But harm isn’t always visible from that vantage point. A human rights framing draws the lens outward, toward the people affected, and the conditions that shape their vulnerability. It shifts the question from what the system is doing to what it is enabling or making harder to contest.

    Being included changes how people experience fairness

    The paper points out that many AI systems are built without meaningful input from the people they affect. A rights-based approach treats that as more than a design flaw. It recognises that participation itself shapes how people judge legitimacy. When people are excluded from decision-making, they are more likely to see the system as arbitrary or imposed, even if the outcomes look defensible on paper. It’s not just what ...
    21 min
  • The trouble with combining AI and psychology
    May 26 2025
    In their paper Combining Psychology with Artificial Intelligence: What could possibly go wrong?, cognitive scientists Iris van Rooij and Olivia Guest explore what happens when AI systems are treated as if they think like people. They examine how psychological research changes when these systems are used not just to mimic behaviour, but to explain it, and what that shift reveals about the assumptions shaping both fields.

    Their argument matters because it’s becoming easy to assume that if a system talks, writes, or predicts like a person, it must understand like one too. This paper unpacks why that assumption is flawed, and what it reveals about the kinds of reasoning science is beginning to accept.

    Why the fusion of psychology and AI is epistemically dangerous

    The fusion of psychology and AI, when approached without careful consideration, can disrupt our understanding of knowledge: how we formulate questions, construct theories, and determine what constitutes an explanation.

    The authors contend that this convergence can lead to errors that are more insidious than straightforward methodological mistakes. The core issue lies in how each field defines understanding and what types of outputs they consider as evidence. When these standards become blurred or diminished, distinguishing between a theory and a mere placeholder, or between a tool and the subject it is intended to study, becomes increasingly challenging.

    Psychology’s research habits lower the bar for explanation

    Psychology, particularly in its mainstream experimental form, has been grappling with inherent structural weaknesses. The replication crisis is the most visible symptom, but there are deeper issues influencing research practices:

    * Hyperempiricism refers to the tendency to prioritise data collection and the identification of statistically significant effects, often at the expense of developing robust theories. The mere presence of an effect is often considered informative, even without an accompanying explanation.

    * Theory-light science describes a trend where researchers focus on how individuals perform specific tasks, without considering whether these tasks genuinely reflect broader cognitive capacities. The emphasis is on measurable outcomes rather than explanatory depth.

    * Statistical proceduralism reflects the field’s inclination to address crises by implementing stricter protocols and enhancing statistical rigour, rather than pursuing conceptual reform. Practices such as pre-registration and replication enhance methodological rigour but fail to tackle fundamental questions about what constitutes a meaningful theory.

    These tendencies render the field vulnerable to what the authors term an “epistemic shortcut” - a shift in how knowledge claims are justified. Rather than developing and testing theoretical assumptions, researchers may start to treat system outputs as inherently explanatory. Consequently, if an AI system produces behaviour resembling human responses, it might be mistakenly viewed as a substitute for genuine cognition, even if the underlying mechanisms remain unexplored.

    AI imports assumptions that favour performance over understanding

    AI introduces its own assumptions that influence its approach to understanding, often rooted in engineering, where success is measured by performance rather than explanation:

    * Makeism suggests that building something is key to understanding it. In practice, if an AI system replicates a behaviour, it’s often assumed that the behaviour is explained. However, replication doesn’t confirm the same underlying process.

    * Treating high-performing models as if they reveal the mechanisms behind behaviours is a common mistake. Even if a system performs well, it may not capture the essence of the phenomenon it mimics.

    * Performance metrics like benchmark results and predictive accuracy are frequently equated with scientific insight. High-scoring models are often deemed valid, even if their success is unclear or irrelevant to cognitive theory.

    * Hype cycles exacerbate these issues: commercial and reputational incentives encourage overstatement, making it easy to overlook constraints like computational intractability or multiple realisability, where different systems produce similar outputs differently.

    These factors foster a reasoning pattern where systems with superficially human-like behaviours are assumed to be cognitively equivalent to humans, often without examining the assumptions behind this equivalence.

    What goes wrong when these patterns reinforce each other

    When psychology and AI are brought together without challenging these habits, their weaknesses can amplify each other. This can result in a number of epistemic errors:

    * Category errors, where AI systems are treated as if they are minds or cognitive agents.

    * Success-to-truth inferences, where good performance is taken as evidence that a system is cognitively plausible.

    * Theory-laundering, where the outputs of machine learning systems are framed as if they ...
    18 min
  • Ep. 10: AI as Normal Technology
    May 1 2025
    This episode explores the paper AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, which challenges the idea of artificial intelligence as a superintelligent, transformative threat. Instead, the authors argue that AI should be understood as part of a long line of general-purpose technologies—more like electricity or the internet, less like an alien mind.

    Their core message is threefold, spanning description, prediction, and prescription: AI is currently a tool under human control, it will likely remain so, and we should approach its development through policies of resilience, not existential fear.

    Arvind Narayanan is a professor of computer science at Princeton University and director of the Center for Information Technology Policy. Sayash Kapoor is a Senior Fellow at Mozilla, a Laurance S. Rockefeller Fellow at the Princeton Center for Human Values, and a computer science PhD candidate at Princeton. Together they co-author AI Snake Oil, named one of Nature’s 10 best books of 2024, and a newsletter followed by 50,000 researchers, policymakers, journalists, and AI enthusiasts.

    This episode reflects on how their framing shifts the conversation away from utopian or dystopian extremes and toward the slower, more human work of integrating technologies into social, organisational, and political life.

    Companion notes

    Key ideas from Ep. 10: AI as Normal Technology

    This episode reflects on AI as Normal Technology by Arvind Narayanan and Sayash Kapoor, a paper arguing that AI should be seen as part of a long pattern of transformative but gradual technologies—not as an existential threat or superintelligent agent. Here are three key ideas that stand out:

    1. AI is a tool, not an alien intelligence

    The authors challenge the common framing of AI as a kind of autonomous mind.

    * Current AI systems are tools under human control, not independent agents.

    * Technological impact comes from how tools are used and integrated, not from some inherent “intelligence” inside the technology.

    * Predicting AI’s future as a runaway force overlooks how society, institutions, and policy shape technological outcomes.

    This framing invites us to ask who is using AI, how it is being used, and for what purposes—not just what the technology can do. It also reminds us that understanding the human side of AI systems—their users, contexts, and social effects—is as important as tracking technical performance.

    2. Progress will be gradual and messy

    The speed of AI diffusion is shaped by more than technical capability.

    * Technological progress moves through invention, innovation, adoption, and diffusion—and each stage has its own pace.

    * Safety-critical domains like healthcare or criminal justice are slow by design, often constrained by regulation.

    * General benchmarks (like exam performance) tell us little about real-world impacts or readiness for professional tasks.

    This challenges the popular narrative of sudden, transformative change and helps temper predictions of mass automation or societal disruption. It also highlights the often-overlooked role of human, organisational, and cultural adaptation—the frictions, resistances, and recalibrations that shape how technologies actually land in the world.

    3. Focus on resilience, not speculative fears

    The paper argues for governance that centres on resilience, not control over hypothetical superintelligence.

    * Most risks—like accidents, misuse, or arms races—are familiar from past technologies and can be addressed with established tools.

    * Policies that improve adaptability, reduce uncertainty, and strengthen downstream safeguards matter more than model-level “alignment.”

    * Efforts to restrict or monopolise access to AI may paradoxically reduce resilience and harm safety innovation.

    This approach reframes AI policy as a governance challenge, not a science fiction problem, and it implicitly points to the importance of understanding how humans and institutions build, maintain, and sometimes erode resilience over time.

    Narayanan and Kapoor’s work is a valuable provocation for anyone thinking about AI futures, policy, or ethics. It pushes the conversation back toward the social and political scaffolding around technology, where, ultimately, its impacts are shaped.

    It’s a reminder that while much of the current conversation focuses on the capabilities and risks of the technology itself, we also need to pay attention to what’s happening on the human side: how people interpret, adopt, adapt to, and reshape these systems in practice.
    29 min
  • Ep. 9: The Bias Loop
    Apr 27 2025

    This episode reflects on a 2024 Nature Human Behaviour article by Moshe Glickman and Tali Sharot, which investigates how interacting with AI systems can subtly alter human perception, emotion, and social judgement. Their research shows that when humans interact with even slightly biased AI, their own biases increase over time—and more so than when interacting with other people.

    This creates a feedback loop: humans train AI, and AI reshapes how humans see the world. The paper highlights a dynamic that often goes unnoticed in AI ethics or UX design conversations—how passive, everyday use of AI systems can gradually reinforce distorted norms of judgement.

    These reflections are especially relevant for AI developers, behavioural researchers, and policymakers thinking about how systems influence belief, bias, and social cognition over time.

    Source: Glickman, M., Sharot, T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav 9, 345–359 (2025). https://doi.org/10.1038/s41562-024-02077-2

    Key ideas from Ep. 9: The Bias Loop

    This episode reflects on a 2024 article in Nature Human Behaviour by Moshe Glickman and Tali Sharot, which explores how human–AI interactions create feedback loops that amplify human biases. The core finding: slightly biased AI doesn’t just reflect human judgement—it magnifies it. And when humans repeatedly engage with these systems, they often adopt those amplified biases as their own.

    Here are three things worth paying attention to:

    1. AI doesn't just mirror—it intensifies

    Interacting with AI can shift our perceptions more than interacting with people.

    * AI systems trained on slightly biased data tended to exaggerate that bias.

    * When people then used those systems, their own bias increased—sometimes substantially.

    * This happened across domains: perceptual tasks (e.g. emotion recognition), social categorisation, and even real-world image generation (e.g. AI-generated images of “financial managers”).

    Unlike human feedback, AI judgements feel consistent, precise, and authoritative—making them more persuasive, even when wrong.

    2. People underestimate AI’s influence

    Participants thought they were being more influenced by accurate AI—but biased AI shaped their thinking just as much.

    * Most participants didn’t realise how much the biased AI was nudging them.

    * Feedback labelled as coming from “AI” had a stronger influence than when labelled as “human,” even when the content was identical.

    * This suggests that perceived objectivity enhances influence—even when the output is flawed.

    Subtle framing cues (like labelling) matter more than we assume in shaping trust and uptake.

    3. Feedback loops are a design risk—and an opportunity

    Bias can accumulate over time. But so can accuracy.

    * Repeated exposure to biased AI increased human bias, but repeated exposure to accurate AI improved human judgement.

    * Small changes in training data, system defaults, or how outputs are framed can shift trajectories over time.

    * That means AI systems don’t just transmit information. They shape norms of perception and evaluation.

    Design choices that reduce error or clarify uncertainty won’t just improve individual outputs—they could reduce cumulative bias at scale.
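
    To make the loop concrete, here is a minimal toy simulation (not from the study, and with made-up parameters) of the dynamic described above: an AI that slightly amplifies the bias in its training signal, and a human who partially adopts the AI’s judgement, which then becomes the next round of training data.

        # Toy sketch of a human-AI bias feedback loop (illustrative parameters, not from the study).
        # "bias" is a signed distortion of some judgement; 0.0 would be unbiased.

        def ai_judgement(training_bias: float, amplification: float = 1.3) -> float:
            # The model exaggerates whatever bias is present in its training data.
            return training_bias * amplification

        def human_update(human_bias: float, ai_bias: float, adoption: float = 0.4) -> float:
            # The human shifts part of the way toward the AI's judgement.
            return (1 - adoption) * human_bias + adoption * ai_bias

        human_bias = training_bias = 0.05  # start with a small initial bias
        for round_number in range(1, 6):
            ai_bias = ai_judgement(training_bias)
            human_bias = human_update(human_bias, ai_bias)
            training_bias = human_bias  # human judgements feed the next round of training
            print(f"round {round_number}: human bias = {human_bias:.3f}")

        # Replacing ai_judgement with an unbiased, accurate system (returning 0.0) makes the
        # same update rule pull human_bias toward zero, mirroring the accuracy finding above.

    The exact numbers mean nothing; the point is the direction of travel: small amplification plus partial adoption compounds across rounds, while an accurate system compounds in the opposite direction.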

    The study’s findings offer a clear behavioural mechanism for something often discussed in theory: how AI systems can influence society indirectly, through micro-shifts in user cognition. For developers, that means accounting not just for output accuracy, but for how people change through use. For behavioural scientists, it raises questions about how norms are formed in system-mediated environments. And for policy, it adds weight to the argument that user-facing AI isn’t just a content issue—it’s a cognitive one.

    Always curious how others are approaching these design risks. Until next time.



    20 min