Episodes

  • #1 - Thomas Hickey: AGI Takeoff, Existential Risk, Regulation Ideas
    Jan 17 2023

    (See timestamps below)
    Thomas Hickey is a Dutch student of both philosophy and artificial intelligence at Utrecht University. We talk about his bachelor's thesis on AGI (Artificial General Intelligence) and the bottlenecks for recursive self-improvement, but we also go into existential risk and what we can do about it.

    CONTACT INFO:
    Alex van der Meer
    email: safetransitionREMOVETHIS@gmail.com (remove the capitalized part; it's there to deter spam)

    SEE MORE OF ME:

    - Twitter:
    https://twitter.com/AlexvanderMeer5

    - YouTube:
    https://www.youtube.com/@safetransition9743

    EPISODE LINKS:

    Eliezer Yudkowsky on why it is lethal not to have retries
    https://intelligence.org/2022/06/10/agi-ruin/

    OpenAI's blog post on their approach to alignment research
    https://openai.com/blog/our-approach-to-alignment-research/

    TIMESTAMPS:
    On some podcast players you can click a timestamp to jump to that point in the episode.

    (00:00) - Introduction

    (03:16) - Recursive self-improvement, how long until superintelligence

    (10:50) - What can be learned in the digital realm

    (14:21) - How fast can it learn in the real world

    (18:34) - Can AGI become better than us?

    (22:54) - Complex enough environment to create superintelligence?

    (29:10) - Can AGI Thomas take over the world?

    (37:40) - Is superintelligence irrelevant for safety?

    (41:38) - Existential risk from AI?

    (48:09) - How to decrease the chance of a bad outcome?

    (49:08) - Regulations

    (53:19) - ChatGPT and the best current models

    (59:57) - Solution to the treacherous turn?  

    (1:05:01) - AGI becomes religious?

    (1:11:03) - Starting point of the intelligence explosion?

    (1:16:49) - OpenAI Alignment approach blog post

    (1:18:29) - Is open source bad for safety?

    (1:24:49) - How to contact me

    1 hr 26 min