AI/ML systems introduce an entirely new attack surface — from poisoned training data and model theft to insecure inference endpoints and privilege escalation inside ML pipelines.
In this episode of the AWS Expert Series Podcast, we take a CISO and Principal Engineer–level deep dive into securing AI/ML workloads on AWS. We explore identity boundaries in Amazon SageMaker, protecting data in S3-based feature stores, securing model artifacts, isolating training jobs, hardening inference endpoints, and implementing zero trust across the ML lifecycle.
If you’re building or governing AI on AWS, this episode goes beyond the basics and into the real architectural and security decisions that matter.
Hosted by Pradeep Rao: https://www.linkedin.com/in/rao-pradeep/