The People's AI: The Decentralized AI Podcast

By: Jeff Wilser

About this audio content

Who will own the future of AI? The giants of Big Tech? Maybe. But what if the people could own AI, not the Big Tech oligarchs? This is the promise of Decentralized AI. And this is the podcast for in-depth conversations on topics like decentralized data markets, on-chain AI agents, decentralized AI compute (DePIN), AI DAOs, and crypto + AI. Hosted by Jeff Wilser, a veteran tech journalist (WIRED, TIME, CoinDesk), host of the "AI-Curious" podcast, and lead producer of Consensus's "AI Summit." Season 3, presented by Vana.

© 2026 The People's AI: The Decentralized AI Podcast
    Episodes
    • AI’s Original Sin: Training on Stolen Work
      Jan 21 2026

      What happens when AI gets smarter by quietly consuming the work of writers, artists, and publishers—without asking, crediting, or paying? And if the “original sin” is already baked into today’s models, what does a fair future look like for human creativity?

      In this episode, we examine the fast-moving collision between generative AI and copyright: the lived experience of authors who feel violated, the legal logic behind “fair use,” and the emerging battle over whether the real infringement is training—or the outputs that can mimic (or reproduce) protected work.

      What we cover

      • A writer’s gut-level reaction to AI training on her books—and why it feels personal, not merely financial. (00:00:00–00:02:00)
      • Pirate sites as the prequel to the AI era: how “free library” scams evolved into training data pipelines. (00:04:00–00:08:00)
      • The market-destruction fear: if models can spin up endless “sequels,” what happens to the livelihood—and identity—of authors? (00:10:00–00:12:30)
      • The legal landscape: why some courts are treating training as fair use, and how that compares to the Google Books precedent. (00:13:00–00:16:30)
      • Two buckets of lawsuits: (1) training as infringement vs. fair use, and (2) outputs that may be too close to copyrighted works (lyrics, Darth Vader-style images, etc.). (00:17:00–00:20:30)
      • Consent vs. compensation: why permission-based regimes might make AI worse (and messy to administer), and why “everyone gets paid” may be mathematically underwhelming for individual creators. (00:21:00–00:25:00)
      • The “archery” thought experiment: should machines be allowed to “learn from books” the way humans do—and where the analogy breaks. (00:26:00–00:29:30)
      • The licensing paradox: if training is fair use, why are AI companies signing licensing deals—and could this be a strategy to “pull up the ladder” against future competitors? (00:30:00–00:33:30)
      • Medium’s blunt framework: the 3 C’s—consent, credit, compensation—and why the fight may be about leverage and power as much as law. (00:34:00–00:43:00)
      • A bigger, scarier question: if AI becomes genuinely great at novels and storytelling, how do we preserve the human spark—and do we risk normalizing a “kleptocracy” of culture? (00:49:00–00:53:00)

      Guests

      • Rachel Vail — Book author (children’s + YA)
      • Mark Lemley — Director, Stanford Program in Law, Science and Technology
      • Tony Stubblebine — CEO, Medium

      Presented by the Vana Foundation.

      Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

      If this one sparked a reaction—share it with a writer friend, a founder building in AI, or anyone who thinks “fair use” is a settled question.

      50 min
    • Generation Generative: Raising Kids with AI “Friends” in a World of Data Extraction and Bias
      Jan 7 2026

      What happens when a “kid-friendly” AI bedtime story turns racy—inside your own car?

      In this episode of The People’s AI (presented by the Vana Foundation), we explore “Generation Generative”: how kids are already using AI, what the biggest risks really are (from inappropriate content to emotional manipulation), and what practical parenting looks like when the tech is everywhere—from smart speakers to AI companions.

      We hear from Dr. Mhairi Aitken (The Alan Turing Institute) on why children’s voices are largely missing from AI governance, Dr. Sonia Tiwari on smart toys and early-childhood AI characters, and Dr. Michael Robb (Common Sense Media) on what his research is finding about teens and AI companions—plus a grounded, parent-focused conversation with journalist (and parent) Kate Morgan.

      Takeaways

      • Kids often understand AI faster—and more ethically—than adults assume (especially around fairness and bias).
      • The “AI companion” category is different from general chatbots: it’s designed to feel personal, and that can be emotionally sticky (and potentially manipulative).
      • Guardrails are inconsistent, age assurance is weak, and “safe by default” still isn’t a safe assumption.
      • The long game isn’t just content risk—it’s intimacy + data: systems that learn a child’s inner life over years may shape identity, relationships, and worldview.
      • Parents don’t need perfection—but they do need ongoing, low-drama conversations and some shared rules.

      Guests

      • Dr. Michael Robb — Head of Research, Common Sense
      • https://www.commonsensemedia.org/bio/michael-robb
      • Dr. Sonia Tiwari — Children’s Media Researcher
      • https://www.linkedin.com/in/soniastic/
      • Dr. Mhairi Aitken — Senior Ethics Fellow, The Alan Turing Institute
      • https://www.turing.ac.uk/people/research-fellows/mhairi-aitken
      • Kate Morgan — Journalist

      Presented by the Vana Foundation

      Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

      51 min
    • AI and Life After Death: Griefbots, Digital Ghosts, and the New Afterlife Economy
      Dec 17 2025

      Can AI help us grieve, or does it blur the line between comfort and delusion in ways we’re not ready for?

      In this episode of The People’s AI, we explore the rise of grief tech: “griefbots,” AI avatars, and “digital ghosts” designed to simulate conversations with deceased loved ones. We start with Justin Harrison, founder of You, Only Virtual, whose near-fatal motorcycle accident and his mother’s terminal cancer diagnosis led him to build a “Versona,” a virtual version of a person’s persona. We dig into how these systems are trained from real-world data, why “goosebump moments” matter more than perfect realism, and what it means when AI inevitably glitches or hallucinates.

      Then we zoom out with Jed Brubaker, director of The Identity Lab at CU Boulder, to look at digital legacy and the design principles that should govern grief tech, including avoiding push notifications, building “sunsets,” and confronting the risk of a “second loss” if a platform fails.

      Finally, we speak with Dr. Elaine Kasket, cyberpsychologist and counselling psychologist, about the psychological reality that grief is idiosyncratic and not scalable, the dangers of grief policing, and the deeper question beneath it all: who controls our data, identity, and access to memories after death.

      In this episode

      • Justin Harrison’s origin story and the creation of a “Versona”
      • What griefbots are, how they’re trained, and why fidelity is hard
      • The ethics: dependence, delusion risk, and “second loss”
      • Consent, rights, and the economics of data after death
      • Cultural attitudes toward death and why Western discomfort shapes the debate
      • A provocative question: if relationships persist digitally, what does “dead” even mean?

      Presented by the Vana Foundation.

      Vana supports the creation of a new internet rooted in data sovereignty and user ownership. Its mission is to build a decentralized data ecosystem where individuals, not corporations, govern their own data and share in the value it creates. Learn more at vana.org.

      53 min