Ghosts in the Training Data: When Old Breaches Poison New AI
About this audio content
In this narrated edition of “Ghosts in the Training Data: When Old Breaches Poison New AI,” we explore how years of incidents, leaks, and scraped datasets quietly shape the behavior of your most important models. You will hear how stolen code, rushed hotfixes, crooked incident logs, and brokered context move from “someone else’s breach” into the background radiation of modern AI platforms. This Wednesday “Headline” feature from Bare Metal Cyber Magazine focuses on leaders’ concerns: trust, accountability, and how much control you really have over the histories your models learn from.
The episode walks through the full arc of the article: how breaches refuse to stay in the past, how contaminated corpora become ground truth, and how defensive AI built on crooked histories can miss what matters. It then shifts to business AI running on stolen or opaque context, before closing with a practical framing for governing training data like a supply chain. Along the way, you will get language to talk with boards, vendors, and internal teams about data provenance, model risk, and the leadership moves that turn invisible ghosts into visible dependencies you can actually manage.