Optimising for Trouble – Game Theory and AI Safety | with Jobst Heitzig

About this audio content

What happens when an AI system faithfully follows a flawed goal? In this episode, we explore how even well-designed algorithms can produce dangerous outcomes — from amplifying hate speech to mismanaging infrastructure — simply by optimising a reward function which, like all reward functions, fails to encode all that matters. We discuss the hidden risks of reinforcement learning, why over-optimisation can backfire, and how game theory helps us rethink what it means for AI to act "rationally" in complex, real-world environments.
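The failure mode described above can be made concrete with a toy sketch (not from the episode; all labels and numbers are hypothetical): an optimiser faithfully maximises a proxy reward that omits a cost term, and so selects exactly the option the designers wanted to avoid.

```python
# Toy illustration of reward misspecification: the proxy reward counts
# engagement but omits harm, so a faithful optimiser picks the most
# harmful option. Labels and numbers are invented for illustration.

candidates = [
    # (label, engagement, harm) -- harm is the part the proxy omits
    ("balanced article", 5.0, 0.5),
    ("clickbait",        8.0, 4.0),
    ("hate speech",      9.0, 9.0),
]

def proxy_reward(item):
    _, engagement, _ = item
    return engagement  # fails to encode all that matters

def true_utility(item):
    _, engagement, harm = item
    return engagement - 2.0 * harm  # what we actually care about

best_by_proxy = max(candidates, key=proxy_reward)
best_by_truth = max(candidates, key=true_utility)

print(best_by_proxy[0])  # -> hate speech
print(best_by_truth[0])  # -> balanced article
```

The optimiser is not malfunctioning here; it is doing exactly what the reward function says, which is the point the episode makes about over-optimisation.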

Jobst Heitzig is a mathematician at the Potsdam Institute for Climate Impact Research and an expert in AI safety and decision design.
