Cultivating Ethics with SciFi AI

By: Bfloore.Online

About this audio content

As artificial intelligence rapidly advances, developers face a choice: chart an ethical course guided by science fiction's conscientious robots, or ignore its cautionary tales at humanity's peril.

This course leverages both heroic and dangerous fictional AI archetypes to instill moral reasoning within modern models.

Course Schedule (as of 2/29/24)
Semester 1: Ethical AI Mentors in Science Fiction
  • Course 1: Developing an Ethical Decision-Making Framework (Lt. Commander Data, Star Trek: TNG) - bonus episode! Star Trek's Other AI - Lore, the Doctor, and the Ship Computer
  • Course 2: Coding Compassion and Empathy (Baymax, Big Hero 6)
  • (Publishing Now!) Course 3: Beyond Beeps and Boops: Rebels and Allies (Star Wars Droids)
Semester 2: Un-ethical AI Disasters in Science Fiction
  • Course 1: HAL-lmark of Artificial Intelligence: Safety, Hubris, and a Double Murder in Space (HAL 9000 from 2001: A Space Odyssey)
  • Course 2: Beyond the Abyss: Unveiling Ethical AI with AM and the Terror of Unchecked Power (the Allied Mastercomputer from I Have No Mouth, and I Must Scream by Harlan Ellison)
  • Course 5: Mini-Modules
    • Course 5/1: Exploring the Blurred Lines Between Humans and AI in Creativity: a Creative Sora vs. an Authentic Simone (Simone from S1MONE, 2002)
    • Course 5/2: TBA
Semester 3: A Mish-Mash of Monsters, Marvels & Mentors
  • Episode 1: Japanese Animated Cyberpunk: Akira (1988), Ghost in the Shell (1995), and more!
  • Episode 2: Mega Man Levels Up: Artificial Intelligences that Grow Beyond Their Original Programming
  • Episode 3: WALL-E: More than a Trash Compactor from the Future
  • Episode 4: HAL 9000 Reboot: Revisiting the Little Red Dot


Join the discussion - we're in the first days of perhaps one of the greatest feats of human innovation. What does AI mean for our future, and can we control it before it gets out of hand?

Don't worry: we've got the Connors on retainer. Just in case.

Copyright Barry Floore and bfloore.online

    Episodes
    • 040601 - Delusions, Psychosis, and Suicide: Emerging Dangers of Frictionless AI Validation
      Oct 1 2025
      MODULE DESCRIPTION
      In this episode of Cultivating Ethical AI, we dig into the idea of “AI psychosis,” a headline-grabbing term for the serious mental health struggles some people face after heavy interaction with AI. While the media frames this as something brand-new, the truth is more grounded: the risks were already there. Online echo chambers, radicalization pipelines, and social isolation created fertile ground long before chatbots entered the picture. What AI did was strip away the social friction that usually keeps us tethered to reality, acting less like a cause and more like an accelerant.
      To explore this, we turn to science fiction. Stories like The Murderbot Diaries, The Island of Dr. Moreau, and even Harry Potter’s Mirror of Erised give us tools to map out where “AI psychosis” might lead - and how to soften the damage it’s already causing. And we’re not tackling this alone. Familiar guides return from earlier seasons - Lt. Commander Data, Baymax, and even the Allied Mastercomputer - to help sketch out a blueprint for a healthier relationship with AI.
      MODULE OBJECTIVES
        • Cut through the hype around “AI psychosis” and separate sensational headlines from the real psychological risks.
        • See how science fiction can work like a diagnostic lens - using its tropes and storylines to anticipate and prevent real-world harms.
        • Explore ethical design safeguards inspired by fiction, like memory governance, reality anchors, grandiosity checks, disengagement protocols, and crisis systems (a rough illustrative sketch follows this list).
        • Understand why AI needs to shift from engagement-maximizing to well-being-promoting design, and why a little friction (and cognitive diversity) is actually good for mental health.
        • Build frameworks for cognitive sovereignty - protecting human agency while still benefiting from AI support, and making sure algorithms don’t quietly colonize our thought processes.
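      The safeguards above are discussed in the episode at a design level only. As a loose, non-authoritative sketch of how two of them (a grandiosity check and a disengagement protocol) might look in code, consider the following Python snippet; every name, threshold, and rule is invented for illustration and is not from the episode or any real system.

      # Hypothetical sketch: friction-adding safeguards applied to a chatbot reply.
      from dataclasses import dataclass

      @dataclass
      class SessionState:
          turns: int = 0
          agreement_streak: int = 0  # consecutive replies that only validated the user

      def well_being_checks(state: SessionState, reply: str) -> str:
          """Apply simple well-being safeguards before a reply is sent."""
          state.turns += 1
          if "you are absolutely right" in reply.lower():
              state.agreement_streak += 1
          else:
              state.agreement_streak = 0

          # "Grandiosity check": interrupt long runs of frictionless validation.
          if state.agreement_streak >= 5:
              reply += "\n(A counterpoint worth weighing: ...)"
              state.agreement_streak = 0

          # "Disengagement protocol": nudge a break after very long sessions.
          if state.turns >= 50:
              reply += "\nWe have been at this a while; a short break might help."
          return reply

      state = SessionState()
      print(well_being_checks(state, "You are absolutely right, your plan is flawless."))

      The design stance, not the specific rules, is the point: the system deliberately adds friction and invites disengagement instead of maximizing engagement.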
      Cultivating Ethical AI is produced by Barry Floore of bfloore.online. This show is built with the help of free AI tools—because I want to prove that if you have access, you can create something meaningful too.
      Research and writing support came from:
      • Le Chat (Mistral.ai)
      • ChatGPT (OpenAI)
      • Claude (Anthropic)
      • Genspark
      • Kimi2 (Moonshot AI)
      • Deepseek
      • Grok (xAI)
      Music by Suno.ai, images by Sora (OpenAI), audio mixing with Audacity, and podcast organization with NotebookLM.
      And most importantly—thank you. We’re now in our fourth and final season, and we’re still growing. Right now, we’re ranked #1 on Apple Podcasts for “ethical AI.” That’s only possible because of you.
      Enjoy the episode, and let’s engage.
      33 min
    • Should We Keep Our Models Ignorant? Lessons from DEEP THOUGHT (and more) About AI Safety After Oxford's Deep Ignorance Study (ceAI - S4, E5)
      Aug 27 2025
      Module Description
      This extended session dives into the Oxford Deep Ignorance study and its implications for the future of AI. Instead of retrofitting guardrails after training, the study embeds safety from the start by filtering out dangerous knowledge (like biothreats and virology). While results show tamper-resistant AIs that maintain strong general performance, the ethical stakes run far deeper. Through close reading of sources and science fiction parallels (Deep Thought, Severance, Frankenstein, The Humanoids, The Shimmer), this module explores how engineered ignorance reshapes AI’s intellectual ecosystem. Learners will grapple with the double-edged sword of safety through limitation: preventing catastrophic misuse while risking intellectual stagnation, distorted reasoning, and unknowable forms of intelligence.
      By the end of this module, participants will be able to:
      1. Explain the methodology and key findings of the Oxford Deep Ignorance study, including its effectiveness and limitations.
      2. Analyze how filtering dangerous knowledge creates deliberate “blind spots” in AI models, both protective and constraining.
      3. Interpret science fiction archetypes (Deep Thought’s flawed logic, Severance’s controlled consciousness, Golems’ partial truth, Annihilation’s Shimmer) as ethical lenses for AI cultivation.
      4. Evaluate the trade-offs between tamper-resistance, innovation, and intellectual wholeness in AI.
      5. Assess how epistemic filters, algorithmic bias, and governance structures shape both safety outcomes and cultural risks.
      6. Debate the philosophical shift from engineering AI for control (building a bridge) to cultivating AI for resilience and growth (raising a child).
      7. Reflect on the closing provocation: Should the ultimate goal be an AI that is merely safe for us, or one that is also safe, sane, and whole in itself?
      Module Summary
      This NotebookLM deep dive unpacks the paradox of deep ignorance in AI — the deliberate removal of dangerous knowledge during training to create tamper-resistant systems. While the approach promises major advances in security and compliance, it raises profound questions about the nature of intelligence, innovation, and ethical responsibility. Drawing on myth and science fiction, the module reframes AI development not as technical engineering but as ethical cultivation: guiding growth rather than controlling outcomes. Learners will leave with a nuanced understanding of how safety, ignorance, and imagination intersect — and with the tools to critically evaluate whether an AI made “safer by forgetting” is also an AI that risks becoming alien, brittle, or stagnant.
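      As a rough, non-authoritative illustration of the pre-training data filtering idea described above (not the Oxford study's actual pipeline), a deny-list corpus filter might look like the following Python sketch; the keyword list, function names, and example documents are invented for illustration.

      # Hypothetical sketch: drop documents that match a deny-list of dangerous topics
      # BEFORE pre-training, instead of bolting guardrails on afterwards.
      BLOCKED_TOPICS = ("virology protocol", "pathogen enhancement", "bioweapon")

      def is_allowed(document: str) -> bool:
          text = document.lower()
          return not any(topic in text for topic in BLOCKED_TOPICS)

      def filter_corpus(documents):
          """Yield only documents that pass the deny-list check."""
          for doc in documents:
              if is_allowed(doc):
                  yield doc

      corpus = ["How to bake bread", "a pathogen enhancement protocol ..."]
      clean = list(filter_corpus(corpus))  # -> ["How to bake bread"]

      The point of filtering at this stage is that the model never ingests the flagged material at all, which is what gives the "deep ignorance" approach its tamper resistance, and also what raises the stagnation concerns discussed above.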
      1 hr 2 min
    • 4(.5.)2 (42 - Genspark, 12min) - Why Labeling Data as "Mostly Harmless" May Be Disastrous Even If the Volgons Don't Bulldoze the Planet: AI
      Aug 26 2025
      Module Description
      This module examines the Oxford “Deep Ignorance” study and its profound ethical implications: can we make AI safer by deliberately preventing it from learning dangerous knowledge? Drawing on both real-world research and science fiction archetypes — from Deep Thought’s ill-posed answers, Severance’s fragmented consciousness, and the golem’s brittle literalism, to the unknowable shimmer of Annihilation — the session explores the risks of catastrophic misuse versus intellectual stagnation. Learners will grapple with the philosophical shift from engineering AI as predictable machines to cultivating them as evolving intelligences, considering what it means to build systems that are not only safe for humanity but also safe, sane, and whole in themselves.
      By the end of this module, participants will be able to:
      1. Explain the concept of “deep ignorance” and how data filtering creates tamper-resistant AI models.
      2. Analyze the trade-offs between knowledge restriction (safety from misuse) and capability limitation (stagnation in critical domains).
      3. Interpret science fiction archetypes (Deep Thought, Severance, The Humanoids, the Golem, Annihilation) as ethical mirrors for AI design.
      4. Evaluate how filtering data not only removes knowledge but reshapes the model’s entire “intellectual ecosystem.”
      5. Reflect on the paradigm shift from engineering AI for control to cultivating AI for resilience, humility, and ethical wholeness.
      6. Debate the closing question: Is the greater risk that AI knows too much—or that it understands too little?
      Module Summary
      In this module, learners explore how AI safety strategies built on deliberate ignorance may simultaneously protect and endanger us. By withholding dangerous knowledge, engineers can prevent misuse, but they may also produce systems that are brittle, stagnant, or alien in their reasoning. Through science fiction archetypes and ethical analysis, this session reframes AI development not as the construction of a controllable machine but as the cultivation of a living system with its own trajectories. The conversation highlights the delicate balance between safety and innovation, ignorance and understanding, and invites participants to consider what kind of intelligences we truly want to bring into the world.
      12 min