Safe Transition Talks

By: Alex van der Meer

About this audio content

This podcast series is about our future with Artificial Intelligence, with most of the focus on AGI, or Artificial General Intelligence. In solo episodes I will take you from zero knowledge of AI to an understanding of the subject, and I will talk to experts, mainly to try to answer the question: what can we do now to bring down the probability of a bad outcome with AI? In other words, how do we increase our chance of a safe transition to the machine intelligence era? If you want to help take action or share ideas, reach out to me! Contact info is in the episode description.
Alex van der Meer
    Episodes
    • #1 - Thomas Hickey: AGI Takeoff, Existential Risk, Regulation Ideas
      Jan 17 2023

      (See timestamps below)
      Thomas Hickey is a Dutch student of both philosophy and artificial intelligence at Utrecht University. We discuss his bachelor's thesis on AGI (Artificial General Intelligence), which examines bottlenecks to recursive self-improvement, and we also go into existential risk and what we can do about it.

      CONTACT INFO:
      Alex van der Meer
      email: safetransitionREMOVETHIS@gmail.com (remove the capital letters; they are there to deter spam)

      SEE MORE OF ME:

      - Twitter:
      https://twitter.com/AlexvanderMeer5

      - YouTube:
      https://www.youtube.com/@safetransition9743

      EPISODE LINKS:

      Eliezer Yudkowsky on why it is lethal not to have retries
      https://intelligence.org/2022/06/10/agi-ruin/

      OpenAI's blog post on their approach to alignment research
      https://openai.com/blog/our-approach-to-alignment-research/

      TIMESTAMPS:
      On some podcast players, you can click a timestamp to jump to that point in the episode.

      (00:00) - Introduction

      (03:16) - Recursive self-improvement, how long until superintelligence

      (10:50) - What can be learned in the digital realm

      (14:21) - How fast can it learn in the real world

      (18:34) - Can AGI become better than us?

      (22:54) - Complex enough environment to create superintelligence?

      (29:10) - Can AGI Thomas take over the world?

      (37:40) - Is superintelligence irrelevant for safety?

      (41:38) - Existential risk from AI?

      (48:09) - How to decrease the chance of a bad outcome?

      (49:08) - Regulations

      (53:19) - ChatGPT and the best current models

      (59:57) - Solution to the treacherous turn?  

      (1:05:01) - AGI becomes religious?

      (1:11:03) - Starting point of the intelligence explosion?

      (1:16:49) - OpenAI Alignment approach blog post

      (1:18:29) - Is Open source bad for safety?

      (1:24:49) - How to contact me

      1 hr 26 min