Consistently Candid

By: Sarah Hastings-Woodhouse

About this audio content

AI safety, philosophy and other things. © 2025 Consistently Candid. Philosophy · Social Sciences
    Episodes
    • #20 Frances Lorenz on the emotional side of AI x-risk, being a woman in a male-dominated online space & more
      May 14 2025

      In this episode, I chatted with Frances Lorenz, events associate at the Centre for Effective Altruism. We covered our respective paths into AI safety, the emotional impact of learning about x-risk, what it's like to be female in a male-dominated community and more!

      Follow Frances on Twitter

      Subscribe to her Substack

      Apply for EAG London!

      52 min
    • #19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk
      Apr 13 2025

      Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI.

      We discussed why AI poses an existential risk to humanity, what makes this problem very hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more.

      Follow Gabe on Twitter

      Read The Compendium and A Narrow Path

      1 hr 37 min
    • #18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
      Mar 2 2025

      A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistlestop tour of the most important developments we've seen over the last year!

      We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more!

      Follow Nathan on Twitter

      Listen to The Cognitive Revolution

      My Twitter & Substack

      1 hr 46 min