Should We Keep Our Models Ignorant? Lessons from DEEP THOUGHT (and more) About AI Safety After Oxford's Deep Ignorance Study (ceAI - S4, E5)
About this listen
- Explain the methodology and key findings of the Oxford Deep Ignorance study, including its effectiveness and limitations.
- Analyze how filtering dangerous knowledge out of pretraining data creates deliberate “blind spots” in AI models, both protective and constraining (a minimal filtering sketch follows this list).
- Interpret science fiction archetypes (Deep Thought’s flawed logic, Severance’s controlled consciousness, Golems’ partial truth, Annihilation’s Shimmer) as ethical lenses for AI cultivation.
- Evaluate the trade-offs between tamper-resistance, innovation, and intellectual wholeness in AI.
- Assess how epistemic filters, algorithmic bias, and governance structures shape both safety outcomes and cultural risks.
- Debate the philosophical shift from engineering AI for control (building a bridge) to cultivating AI for resilience and growth (raising a child).
- Reflect on the closing provocation: Should the ultimate goal be an AI that is merely safe for us, or one that is also safe, sane, and whole in itself?
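The Deep Ignorance study builds its safeguards by filtering proxy-hazardous documents out of the pretraining corpus in multiple stages. The Python sketch below is an illustrative reduction of that idea, not the authors' pipeline: the `BLOCKLIST` terms, the `classify_risk` stub, and the 0.5 threshold are all hypothetical stand-ins for their curated term lists and trained classifiers.

```python
from typing import Callable, Iterable, Iterator

# Hypothetical blocklist; the real study uses curated term lists and
# trained classifiers, not these placeholder strings.
BLOCKLIST = {"dual-use protocol", "enhancement of pathogen"}

def keyword_flag(doc: str) -> bool:
    """Cheap first pass: flag documents containing any blocklisted phrase."""
    lowered = doc.lower()
    return any(term in lowered for term in BLOCKLIST)

def classify_risk(doc: str) -> float:
    """Stub for a learned risk scorer (assumption: returns P(hazardous)).
    A real pipeline would call a trained classifier here."""
    return 1.0 if keyword_flag(doc) else 0.0

def filter_corpus(docs: Iterable[str],
                  scorer: Callable[[str], float] = classify_risk,
                  threshold: float = 0.5) -> Iterator[str]:
    """Yield only documents scored below the risk threshold.
    Everything above the threshold is dropped before pretraining,
    which is what creates the model's deliberate 'blind spot'."""
    for doc in docs:
        # Stage 1: cheap keyword screen routes suspect docs to stage 2.
        if keyword_flag(doc) and scorer(doc) >= threshold:
            continue  # excluded from the training set
        yield doc

if __name__ == "__main__":
    corpus = [
        "A history of public health campaigns.",
        "Notes on the enhancement of pathogen transmissibility.",
    ]
    for kept in filter_corpus(corpus):
        print(kept)  # only the benign document survives
```

The two-stage shape, a cheap keyword screen routing only suspect documents to a costlier classifier, is what makes corpus-scale filtering tractable; everything the filter drops is knowledge the model never sees, which is at once the safeguard and the blind spot the episode examines.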