AI Ethics Now

By: Tom Ritchie, Jennie Mills, IATL, WIHEA, University of Warwick

About this podcast

AI Ethics Now is a podcast dedicated to exploring the complex issues surrounding artificial intelligence from a non-specialist perspective, including bias, ethics, privacy, and accountability. Join us as we discuss the challenges and opportunities of AI and work towards a future where technology benefits society as a whole. This podcast was first developed by Dr Tom Ritchie and Dr Jennie Mills as part of The AI Revolution: Ethics, Technology, and Society module, taught as part of IATL at the University of Warwick.
    Episodes
    • 10. AI and Dependence: Are We Misdiagnosing the Harms?
      Jan 4 2026

      Do you use ChatGPT or Claude daily for work? Mark Carrigan, Senior Lecturer in Education at Manchester Institute of Education, joins the podcast to discuss why we might be misdiagnosing the harms of generative AI. His research suggests the problems aren't inherent to the technology itself, but arise when AI systems meet the already broken bureaucracies of higher education and other sectors.

      Mark introduces the LLM Interaction Cycle, a framework he developed with philosopher of technology, Milan Stürmer, to understand how we engage with AI over time through three phases: positioning (how we assign roles to the AI), articulation (how we put our needs into words), and attunement (the sense that the AI understands us). He explains how use that begins as purely transactional often drifts toward something more affective as models build memory and context about us, and why this drift matters for how we think about ethical AI use.

      We go on to explore teacher agency in the age of generative AI, examining why fear of appearing ignorant prevents honest conversations between educators and students. Mark discusses three key risks facing universities:

      • lock-in (dependency on specific platforms),
      • loss of reflection (increasingly habitual rather than thoughtful use), and
      • commercial capture (vendor interests shaping institutional practices).

      He argues that reflective use isn't just beneficial but ethically necessary, yet the pressures facing academics and students make reflection increasingly difficult.

      The conversation finishes by examining why universities in financial crisis are particularly vulnerable to both the promises and pitfalls of AI adoption, how institutional AI strategies risk creating new waves of disruption, and why understanding student realities (including significant paid work commitments) is essential to addressing concerns about AI in education. Mark concludes by making the case that we cannot understand the problems of generative AI without understanding the wider systemic crisis in higher education.

      This episode launches our new short series featuring conversations from the Building Bridges: A Symposium on Human-AI Interaction held at the University of Warwick on 21 November 2025. The symposium was organised by Dr Yanyan Li, Xianzhi Chen, and Kaiqi Yu, and jointly funded by the Institute of Advanced Study Conversations Scheme and the Doctoral College Networking Fund, with sponsorship from Warwick Students' Union.

      AI Ethics Now

      Exploring the ethical dilemmas of AI in Higher Education and beyond.

      A University of Warwick IATL Podcast

      This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick. The AI Revolution module explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

      This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had to help offer a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience.

      Join each fortnight for new critical conversations on AI Ethics with local, national, and international experts.

      We will discuss:

      • Ethical Dimensions of AI: Fairness, bias, transparency, and accountability.
      • Societal Implications: How AI is transforming industries, economies, and our understanding of humanity.
      • The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity.

      If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.

      35 min
    • 9. AI and Bias: How AI Shapes What We Buy
      Dec 15 2025

      As you search for Christmas gifts this season, have you asked ChatGPT or Gemini for recommendations? Katarina Mpofu and Jasmine Rienecker from Stupid Human join the podcast to discuss their groundbreaking research examining how AI systems influence public opinion and decision-making. Conducted in collaboration with the University of Oxford, their study analysed over 8,000 AI-generated responses to uncover systematic biases in how AI systems like ChatGPT and Gemini recommend brands, institutions, and governments.

      Their findings reveal that AI assistants aren't neutral—they have structured and persistent preferences that favour specific entities regardless of how questions are asked or who's asking. ChatGPT consistently recommended Nike for running shoes in over 90% of queries, whilst both models claimed the US has the best national healthcare system. These preferences extend beyond consumer products into government policy and educational institutions, raising critical questions about fairness, neutrality, and AI's role in shaping global narratives.

      We explore how AI assistants are more persuasive than human debaters, why users trust these systems as sources of truth without questioning their recommendations, and how geographic and cultural biases develop through training data, semantic associations, and user feedback amplification. Katarina and Jasmine explain why language matters - asking in English produces US-centric biases regardless of where you're located - and discuss the implications for smaller brands, niche markets, and diverse user groups systematically disadvantaged by current AI design.

      The conversation examines whether companies understand they're building these preferences into systems, the challenge of cross-domain bias contamination, and the urgent need for frameworks to identify and benchmark AI biases beyond protected characteristics like race and gender.


      25 min
    • 8. AI and Decentralisation: Own AI or Be Owned By It
      Nov 30 2025

      In this episode, Max Sebti, co-founder and CEO of Score, challenges the centralised control of computer vision systems and makes the case for decentralised AI as a matter of public interest.

      Max brings experience from AI data annotation and model development, where he witnessed how closed systems collect and control vast amounts of visual data. Now at Score, running on the Bittensor network, he's building "open source computer vision" - systems that are publicly verifiable, permissionless, and collectively owned rather than corporately controlled.

      His central argument: we face a choice between "own AI or be owned by AI." As computer vision expands from sport into healthcare, insurance, and public surveillance, who controls these systems becomes existential. Max argues citizens should have access to model weights and training data as a democratic necessity.

      We explore what decentralisation means in practice: how Bittensor's incentive mechanisms unlock talent and data centralised systems can't access, why open source doesn't sacrifice performance, and the stark reality that camera systems are making decisions about you based on models you cannot see.

      Max introduces competing visions: a "Skynet" scenario where private entities own all visual data, versus a "solar punk" future of abundant energy and AGI where open AI serves collective benefit. The difference? Transparency, accountability, and public ownership.

      The conversation tackles thorny questions: where should boundaries exist in open systems? How do you prevent misuse whilst maintaining accessibility? Max admits his team hasn't solved this - decentralised AI means thousands of contributors with different values building toward the same goal.

      Max closes with a call to action: push for open source AI models where people can verify, query, and hold systems accountable. His vision moves AI from corporate product to public utility - not because it's idealistic, but because the alternative is too dangerous.


      26 min