AI RMF Podcast 09 - NIST AI 100 - 2e2025 - Adversarial Machine Learning
About this audio content
National Institute of Standards and Technology AI 100-2e2025, Adversarial Machine Learning, examines the security risks posed by malicious actors who intentionally manipulate machine learning systems and outlines strategies to strengthen their resilience. The report explains how adversarial attacks can occur during different phases of the AI lifecycle, including data poisoning during training, model evasion through carefully crafted inputs, model extraction, and inference-time manipulation. It emphasizes that AI systems introduce new attack surfaces beyond traditional cybersecurity threats, requiring specialized risk assessment, testing, and monitoring approaches. The publication promotes secure-by-design principles, robust evaluation techniques, red-teaming, and continuous monitoring to detect and mitigate adversarial behaviors. Ultimately, NIST AI 100-2e2025 reinforces the need to integrate AI security into broader risk management and governance frameworks, ensuring machine learning systems remain reliable, trustworthy, and resilient in adversarial environments.
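To make the "model evasion through carefully crafted inputs" idea concrete, here is a minimal sketch of a gradient-based evasion attack (in the style of the Fast Gradient Sign Method) against a toy logistic classifier. The weights, input, and epsilon below are hypothetical illustration values, not taken from the report:

```python
import math

def predict(w, b, x):
    """Logistic classifier: probability that input x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(w, b, x, y, eps):
    """Nudge each feature by eps in the direction that increases the
    cross-entropy loss for true label y (gradient sign is (p - y) * w_i)."""
    p = predict(w, b, x)
    grad_sign = [math.copysign(1.0, (p - y) * wi) for wi in w]
    return [xi + eps * g for xi, g in zip(x, grad_sign)]

# Hypothetical toy model and an input the model classifies correctly.
w, b = [2.0, -1.5], 0.1
x, y = [1.0, 0.2], 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
print(predict(w, b, x) > 0.5)      # True: original input is class 1
print(predict(w, b, x_adv) > 0.5)  # False: small perturbation flips the label
```

The perturbation is small relative to the input, yet it flips the model's decision — this asymmetry between benign-looking inputs and model behavior is exactly the new attack surface the report says requires specialized testing and monitoring.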