#002: Securing AI/ML Workloads on AWS
About this audio content
AI/ML systems introduce an entirely new attack surface — from poisoned training data and model theft to insecure inference endpoints and privilege escalation inside ML pipelines.
In this episode of the AWS Expert Series Podcast, we take a CISO and Principal Engineer–level deep dive into securing AI/ML workloads on AWS. We explore identity boundaries in Amazon SageMaker, protecting data in S3-based feature stores, securing model artifacts, isolating training jobs, hardening inference endpoints, and implementing zero trust across the ML lifecycle.
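To make the feature-store topic concrete: one common control discussed in this space is an S3 bucket policy that denies unencrypted uploads and restricts object access to a single SageMaker execution role. The sketch below builds such a policy as JSON; the bucket name and role ARN are hypothetical placeholders, not details from the episode.

```python
import json

# Hypothetical names for illustration only; not from the episode.
FEATURE_BUCKET = "example-ml-feature-store"
SAGEMAKER_ROLE = "arn:aws:iam::123456789012:role/ExampleSageMakerExecutionRole"

def feature_store_bucket_policy(bucket: str, role_arn: str) -> dict:
    """Build an S3 bucket policy that denies unencrypted uploads and
    denies object access to any principal other than the given role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Reject PutObject requests that do not use SSE-KMS.
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption": "aws:kms"
                    }
                },
            },
            {   # Deny object reads/writes to everyone except the ML role.
                "Sid": "RestrictToSageMakerRole",
                "Effect": "Deny",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {"aws:PrincipalArn": role_arn}
                },
            },
        ],
    }

policy = feature_store_bucket_policy(FEATURE_BUCKET, SAGEMAKER_ROLE)
print(json.dumps(policy, indent=2))
```

Deny statements with `StringNotEquals` conditions are used here because explicit denies override any allow granted elsewhere in the account, which is the usual way to enforce a hard boundary around training data.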
If you’re building or governing AI on AWS, this episode goes beyond the basics into the real architectural and security decisions that matter.
Hosted by Pradeep Rao: https://www.linkedin.com/in/rao-pradeep/