Episodes

  • Generation Generative: Raising Kids with AI “Friends” in a World of Data Extraction and Bias
    Jan 7 2026

    What happens when a “kid-friendly” AI bedtime story turns racy—inside your own car?

    In this episode of The People’s AI (presented by the Vana Foundation), we explore “Generation Generative”: how kids are already using AI, what the biggest risks really are (from inappropriate content to emotional manipulation), and what practical parenting looks like when the tech is everywhere—from smart speakers to AI companions.

    We hear from Dr. Mhairi Aitken (The Alan Turing Institute) on why children’s voices are largely missing from AI governance, Dr. Sonia Tiwari on smart toys and early-childhood AI characters, and Dr. Michael Robb (Common Sense Media) on what his research is finding about teens and AI companions—plus a grounded, parent-focused conversation with journalist (and parent) Kate Morgan.

    Takeaways

• Kids often understand AI faster, and reason about it more ethically, than adults assume (especially around fairness and bias).
    • The “AI companion” category is different from general chatbots: it’s designed to feel personal, and that can be emotionally sticky (and potentially manipulative).
    • Guardrails are inconsistent, age assurance is weak, and “safe by default” still isn’t a safe assumption.
    • The long game isn’t just content risk—it’s intimacy + data: systems that learn a child’s inner life over years may shape identity, relationships, and worldview.
    • Parents don’t need perfection—but they do need ongoing, low-drama conversations and some shared rules.

    Guests

• Dr. Michael Robb — Head of Research, Common Sense Media
    • https://www.commonsensemedia.org/bio/michael-robb
    • Dr. Sonia Tiwari — Children’s Media Researcher
    • https://www.linkedin.com/in/soniastic/
    • Dr. Mhairi Aitken — Senior Ethics Fellow, The Alan Turing Institute
    • https://www.turing.ac.uk/people/research-fellows/mhairi-aitken
    • Kate Morgan — Journalist

    Presented by the Vana Foundation

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    51 min
  • AI and Life After Death: Griefbots, Digital Ghosts, and the New Afterlife Economy
    Dec 17 2025

    Can AI help us grieve, or does it blur the line between comfort and delusion in ways we’re not ready for?

    In this episode of The People’s AI, we explore the rise of grief tech: “griefbots,” AI avatars, and “digital ghosts” designed to simulate conversations with deceased loved ones. We start with Justin Harrison, founder of You, Only Virtual, whose near-fatal motorcycle accident and his mother’s terminal cancer diagnosis led him to build a “Versona,” a virtual version of a person’s persona. We dig into how these systems are trained from real-world data, why “goosebump moments” matter more than perfect realism, and what it means when AI inevitably glitches or hallucinates.

    Then we zoom out with Jed Brubaker, director of The Identity Lab at CU Boulder, to look at digital legacy and the design principles that should govern grief tech, including avoiding push notifications, building “sunsets,” and confronting the risk of a “second loss” if a platform fails.

    Finally, we speak with Dr. Elaine Kasket, cyberpsychologist and counselling psychologist, about the psychological reality that grief is idiosyncratic and not scalable, the dangers of grief policing, and the deeper question beneath it all: who controls our data, identity, and access to memories after death.

    In this episode

    • Justin Harrison’s origin story and the creation of a “Versona”
    • What griefbots are, how they’re trained, and why fidelity is hard
    • The ethics: dependence, delusion risk, and “second loss”
    • Consent, rights, and the economics of data after death
    • Cultural attitudes toward death and why Western discomfort shapes the debate
    • A provocative question: if relationships persist digitally, what does “dead” even mean?

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    53 min
• The Invisible (and Underpaid) Data Workers Behind the “Magic” of AI
    Dec 3 2025

    Who are the invisible human data-workers behind the “magic” of AI, and what does their work really look like?

In this episode of The People’s AI, presented by Vana, we pull back the curtain on AI data labeling, ghost work, and content moderation with former data worker and organizer Krystal Kauffman and AI researcher Graham Morehead. We hear how low-paid workers around the world train large language models, power RLHF safety systems, and scrub the worst content off the internet so the rest of us never see it.

    We trace the journey from early data labeling projects and Amazon Mechanical Turk to today’s global workforce of AI data workers in the US, Latin America, Kenya, India, and beyond. We talk about trauma, below-minimum-wage pay, and the ethical gray zones of labeling surveillance imagery and moderating violence. We also explore how workers are organizing through projects like the Data Workers Inquiry at the Distributed AI Research Institute (DAIR), and why data sovereignty and user-owned data are part of the long-term solution.

    Along the way, we ask a simple question with complicated answers: if AI depends on human labor, what do those humans deserve?

    Timestamps:

    • 0:02 – Krystal’s life as an AI data worker and the “10 cents a minute” rule
    • 2:40 – What is data labeling, and why AI can’t exist without it
    • 6:20 – RLHF, safety, and the hidden workforce grading AI outputs
    • 9:53 – Amazon Mechanical Turk and building Alexa, image datasets, and more
    • 14:42 – Labeling border crossings and the ethics of unknowable end uses
    • 25:00 – Kenyan content moderators, trauma, and extreme exploitation
    • 32:09 – Turker organizing, Turker-run ratings, and early resistance
    • 33:12 – DAIR, the Data Workers Inquiry, and workers investigating their own workplaces
    • 36:43 – Unionization, political pressure, and reasons for hope
    • 41:05 – Why humans will keep “labeling” AI in everyday life for years to come

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    45 min
  • From Nude Robot Photos to The New York Times Suing OpenAI: How AI Feeds on Your Data, Your Life
    Nov 19 2025

    What if your robot vacuum accidentally leaked naked photos of you onto Facebook—and that was just the tip of the iceberg for how your data trains AI?

    In this episode of The People’s AI, presented by Vana, we kick off Season 3 with a deep-dive primer on the real stakes of AI and data: in our homes, in our work, and across society. We start with a jaw-dropping story from MIT Technology Review senior reporter Eileen Guo, who uncovered how images from “smart” robot vacuums—including a woman on a toilet—ended up in a Facebook group for overseas gig workers labeling training data.

    From there, we zoom out: what did this investigation reveal about how AI systems are actually trained, who’s doing the invisible labor of data labeling, and how consent quietly gets stretched (or broken) along the way? We hear from Professor Alan Rubel about how seemingly mundane data—from smart devices to license-plate readers—feeds powerful surveillance infrastructures and tests the limits of long-standing privacy protections.

    Then we move into the workplace. Partners Jennifer Maisel and Steven Lieberman of Rothwell Figg walk us through the New York Times’ landmark lawsuit against OpenAI and Microsoft, and why they see it as a fight over whether copyrighted work—and the broader creative economy—can simply be ingested as free raw material for AI. We explore what this means not just for journalists, but for anyone whose job involves producing text, images, music, or other digital output.

    Finally, we widen the lens with Michael Casey, chairman of the Advanced AI Society, who argues that control of our data is now inseparable from individual agency itself. If a small number of AI companies own the data that defines us, what does that mean for democracy, power, and the risk of a “digital feudalism”?

    We cover:

    • How a robot vacuum’s “beta testing” led to intimate photos being shared with gig workers abroad
    • Why data labeling and annotation work—often done by low-paid workers in crisis-hit regions—is a critical but opaque part of the AI supply chain
    • How consent language like “product improvement” quietly expands to include AI training
    • The New York Times’ legal theory against OpenAI and Microsoft, and what’s at stake for copyright, fair use, and the creative class
    • How AI-generated “slop” can flood the internet, dilute original work, and undercut creators’ livelihoods
    • Why everyday workplace output—emails, docs, Slack messages, meeting transcripts—may become fuel for corporate AI systems
    • The emerging risks of pervasive data capture, from license-plate tracking to always-on devices, and the pressure this puts on Fourth Amendment protections
    • Michael Casey’s argument that data control is a fundamental human right in the digital age—and what a more decentralized, user-owned future might look like

    Guests

    • Eileen Guo – Senior Reporter, MIT Technology Review
    • Professor Alan Rubel – Director, Information School, University of Wisconsin
    • Jennifer Maisel – Partner, Rothwell Figg, counsel to The New York Times
    • Steven Lieberman – Partner, Rothwell Figg, lead counsel in the NYT v. OpenAI/Microsoft case
    • Michael Casey – Chairman, Advanced AI Society

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    34 min
  • Preserving Privacy in the Age of AI, w/ Marta Belcher and Jiahao Sun
    Aug 8 2025

    How do we protect privacy in an AI-powered world?

As AI systems become increasingly powerful, they’re also becoming increasingly invasive. The stakes are no longer theoretical; they’re immediate and personal. From hospitals and law firms to small construction companies, businesses across industries face a pressing dilemma: how can we unlock the benefits of AI without compromising sensitive data?

In this episode of The People’s AI, presented by Gensyn, we explore two leading approaches to privacy-preserving AI. First, we speak with Marta Belcher, President of the Filecoin Foundation and a longtime advocate for civil liberties in technology. She breaks down how centralized AI systems threaten privacy and how decentralized, open-source alternatives such as Filecoin can offer a better path. We also dig into why overzealous regulation could backfire and how the stakes go far beyond crypto and into mainstream business.

    Then, we shift to a more technical conversation with Jiahao Sun, CEO of Flock, a startup pioneering federated learning and blockchain-based governance. He walks us through how decentralized training models are already being used in hospitals in the UK and Korea — and what it will take to make private, local, user-controlled AI the norm.
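The conversation stays high-level, but the core mechanism behind this approach, federated learning, is simple to sketch. Below is a minimal, hypothetical federated-averaging loop in Python; the two-hospital setup, linear model, and hyperparameters are illustrative assumptions, not Flock’s actual implementation. The key property is that raw records never leave each client; only model weights travel.

    # A minimal federated-averaging (FedAvg) sketch. Illustrative only:
    # the model, datasets, and hyperparameters are hypothetical, not
    # Flock's implementation. Data stays local; only weights are shared.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training: gradient steps on a linear-regression loss."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
            w -= lr * grad
        return w

    def federated_round(global_w, clients):
        """Each client trains locally; the server averages the returned weights."""
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        sizes = [len(y) for _, y in clients]
        return np.average(local_ws, axis=0, weights=sizes)  # size-weighted mean

    # Two hypothetical hospitals with private datasets they never exchange.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for n in (100, 60):
        X = rng.normal(size=(n, 2))
        clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

    w = np.zeros(2)
    for _ in range(20):  # 20 communication rounds
        w = federated_round(w, clients)
    print(w)  # converges toward [2.0, -1.0] without ever pooling the data

Real deployments layer more on top of this basic loop, including the blockchain-based governance the episode discusses, but the privacy property is the same: model updates travel, data stays put.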

    We cover:

    • How centralized AI supercharges surveillance risk
    • Why federated learning and encryption may hold the key
    • The case for decentralized AI in healthcare and beyond
    • Why tokenomics, staking, and governance matter for AI trust
    • What a privacy-first future of agents and personal models could look like

    This isn’t just a crypto or Web3 issue — it’s a business imperative.

    Flock:
    https://www.flock.io

    Filecoin:
    https://filecoin.io

    About Gensyn:

    Gensyn is a protocol for machine learning computation. It provides a standardised way to execute machine learning tasks over any device in the world. This aggregates the world's computing supply into a single network, which can support AI systems at far greater scale than is possible today. It is fully open source and permissionless, meaning anyone can contribute to the network or use it.

    Gensyn - LinkedIn - Twitter - Discord

    53 min
  • Solving AI’s Energy Crisis with Decentralized Compute, w/ Akash CEO Greg Osuri
    Jul 31 2025

    What happens when AI runs out of energy? As models grow exponentially, the world’s compute and energy needs are skyrocketing—and our current infrastructure may not keep up.

On today's episode of The People’s AI, presented by Gensyn, we speak with Greg Osuri, founder and CEO of Akash Network, to dive into the future of decentralized AI and why distributed compute could be the key to solving AI’s looming energy crisis. Greg explains the real-world constraints facing AI data centers, why GPU shortages are only the beginning, and how asynchronous AI training and swarm learning could fundamentally change how models are trained.

    We explore:

    • [2:39] The core problem decentralized compute is solving
    • [7:17] AI’s insatiable energy demand and the role of hyperscalers
    • [9:33] Why energy supply is the real AI bottleneck
    • [12:29] Asynchronous and distributed AI training explained
    • [20:44] How mainstream AI is beginning to embrace decentralized models
    • [24:57] Moving AI compute to the power source: solar, wind, and home devices
    • [41:38] The White House AI plan and the future of open-source AI

    This episode connects AI infrastructure, energy sustainability, and decentralization, offering a first-principles look at how we can build a more resilient, sovereign future for machine intelligence.

    If you’re curious about AI compute, open-source AI, and the intersection of energy and technology, this conversation will expand the way you think about the future of AI.

    Akash Network:

    https://akash.network/

    About Gensyn:

    Gensyn is a protocol for machine learning computation. It provides a standardised way to execute machine learning tasks over any device in the world. This aggregates the world's computing supply into a single network, which can support AI systems at far greater scale than is possible today. It is fully open source and permissionless, meaning anyone can contribute to the network or use it.

    Gensyn - LinkedIn - Twitter - Discord

    46 min
  • Can AI Be Creative? With AI Artists Mario Klingemann & Shavonne Wong
    Jul 16 2025

    What does it mean for AI to be creative? Can a machine surprise us—or even move us?

    This week, we explore the frontier of AI-generated art, emotional AI, and decentralized creativity through two very different lenses. In this episode of The People’s AI, presented by Gensyn, we speak with Mario Klingemann, creator of the autonomous artist Botto, and Shavonne Wong, the mind behind the interactive AI companion Eva.

    We look at how Botto uses generative AI to create tens of thousands of artworks per week, then lets a DAO community vote on which get minted as NFTs—some of which have sold at Sotheby’s. Shavonne walks us through Eva, a “listening machine” designed to be emotionally available, raising questions about grief tech, AI intimacy, and what it means to be heard.

    Topics include:

    • (03:48) How Botto works: generation, voting, and DAO-based curation
    • (09:15) The role of taste modeling and semantic drift in AI art
    • (16:02) AI companions, grief tech, and emotional projection
    • (24:30) Will AI cause cultural atrophy—or unlock new creative paradigms?
    • (28:44) The tension between AI as tool vs. AI as collaborator

    We close with a reflection on how human meaning gets projected onto machines—and what that might mean for the future of art, identity, and emotional connection in an AI-shaped world.

    Botto

    Meet Eva Here

    About Gensyn:

    Gensyn is a protocol for machine learning computation. It provides a standardised way to execute machine learning tasks over any device in the world. This aggregates the world's computing supply into a single network, which can support AI systems at far greater scale than is possible today. It is fully open source and permissionless, meaning anyone can contribute to the network or use it.

    Gensyn - LinkedIn - Twitter - Discord

1 hr 10 min
  • Building the AI Agent Future: Shaw Walters (Eliza) & Harry Grieve (Gensyn)
    Jul 9 2025

How will AI agents transform the world? And why do they need to be decentralized?

In this episode of The People’s AI, presented by Gensyn, we explore the frontier of AI agents: their power, their risks, and their role in shaping our future. Host Jeff Wilser talks with Shaw Walters (founder of Eliza Labs) and Harry Grieve (co-founder of Gensyn) about what happens when AI agents become autonomous, self-coding, and capable of running their own workflows or even companies.

    Shaw explains how Eliza Labs is building an operating system for AI agents that can write plugins, make decisions, and operate independently. Harry walks through how Gensyn is creating a decentralized infrastructure for machine learning verification, allowing trust to be cryptographically enforced.

    Together, they discuss:

    • Why “agent swarms” may soon outnumber human teams
    • How cryptographic trust can secure AI systems
    • Whether AI agents will replace white-collar jobs
    • What a decentralized, AI-native internet might look like

    We also dig into philosophical questions: Who governs these agents? What does it mean to build trust in autonomous systems? And what happens to society when the agents are working… for themselves?

    About Gensyn:

    Gensyn is a protocol for machine learning computation. It provides a standardised way to execute machine learning tasks over any device in the world. This aggregates the world's computing supply into a single network, which can support AI systems at far greater scale than is possible today. It is fully open source and permissionless, meaning anyone can contribute to the network or use it.

    Gensyn - LinkedIn - Twitter - Discord

    Eliza Labs:

    https://www.elizaos.ai/

    53 min