AI’s Original Sin: Training on Stolen Work

About this episode

What happens when AI gets smarter by quietly consuming the work of writers, artists, and publishers—without asking, crediting, or paying? And if the “original sin” is already baked into today’s models, what does a fair future look like for human creativity?

In this episode, we examine the fast-moving collision between generative AI and copyright: the lived experience of authors who feel violated, the legal logic behind “fair use,” and the emerging battle over whether the real infringement is the training itself—or the outputs that can mimic (or outright reproduce) protected work.

What we cover

  • A writer’s gut-level reaction to AI training on her books—and why it feels personal, not merely financial. (00:00:00–00:02:00)
  • Pirate sites as the prequel to the AI era: how “free library” scams evolved into training data pipelines. (00:04:00–00:08:00)
  • The market-destruction fear: if models can spin up endless “sequels,” what happens to the livelihood—and identity—of authors? (00:10:00–00:12:30)
  • The legal landscape: why some courts are treating training as fair use, and how that compares to the Google Books precedent. (00:13:00–00:16:30)
  • Two buckets of lawsuits: (1) training as infringement vs. fair use, and (2) outputs that may be too close to copyrighted works (lyrics, Darth Vader-style images, etc.). (00:17:00–00:20:30)
  • Consent vs. compensation: why permission-based regimes might make AI worse (and messy to administer), and why “everyone gets paid” may be mathematically underwhelming for individual creators. (00:21:00–00:25:00)
  • The “archery” thought experiment: should machines be allowed to “learn from books” the way humans do—and where the analogy breaks. (00:26:00–00:29:30)
  • The licensing paradox: if training is fair use, why are AI companies signing licensing deals—and could this be a strategy to “pull up the ladder” against future competitors? (00:30:00–00:33:30)
  • Medium’s blunt framework: the 3 C’s—consent, credit, compensation—and why the fight may be about leverage and power as much as law. (00:34:00–00:43:00)
  • A bigger, scarier question: if AI becomes genuinely great at novels and storytelling, how do we preserve the human spark—and do we risk normalizing a “kleptocracy” of culture? (00:49:00–00:53:00)

Guests

  • Rachel Vail — Book author (children’s + YA)
  • Mark Lemley — Director, Stanford Program in Law, Science and Technology
  • Tony Stubblebine — CEO, Medium

Presented by Vana Foundation.

Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

If this one sparked a reaction—share it with a writer friend, a founder building in AI, or anyone who thinks “fair use” is a settled question.
