AI RMF Podcast 09 - NIST AI 100-2e2025 - Adversarial Machine Learning
About this audio content

National Institute of Standards and Technology AI 100-2e2025, Adversarial Machine Learning, examines the security risks posed by malicious actors who intentionally manipulate machine learning systems, and outlines strategies to strengthen the resilience of those systems. The report explains how adversarial attacks can occur at different phases of the AI lifecycle, including data poisoning during training, model evasion through carefully crafted inputs at inference time, and model extraction. It emphasizes that AI systems introduce new attack surfaces beyond traditional cybersecurity threats, requiring specialized approaches to risk assessment, testing, and monitoring. The publication promotes secure-by-design principles, robust evaluation techniques, red-teaming, and continuous monitoring to detect and mitigate adversarial behavior. Ultimately, NIST AI 100-2e2025 reinforces the need to integrate AI security into broader risk management and governance frameworks, ensuring machine learning systems remain reliable, trustworthy, and resilient in adversarial environments.
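To make "model evasion through carefully crafted inputs" concrete, here is a minimal sketch of a gradient-sign (FGSM-style) evasion attack against a toy logistic-regression classifier. The weights, input, and step size below are invented for illustration and are not drawn from the NIST report; real attacks target far larger models, but the mechanic is the same: small, deliberately chosen input changes flip the model's decision.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under logistic regression."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """One signed gradient step per feature to raise the loss for y_true."""
    p = predict(w, b, x)
    # Gradient of the log-loss w.r.t. input feature i is (p - y_true) * w_i;
    # step each feature by eps in the direction that increases the loss.
    return [xi + eps * math.copysign(1.0, (p - y_true) * wi)
            for wi, xi in zip(w, x)]

# Toy classifier and a correctly classified input (all values illustrative)
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]                        # logit = 1.5, classified as class 1
x_adv = fgsm_perturb(w, b, x, y_true=1.0, eps=1.0)

print(predict(w, b, x) > 0.5)         # original input: class 1
print(predict(w, b, x_adv) > 0.5)     # crafted input: decision flipped
```

Defenses the report discusses, such as adversarial training and robustness evaluation, work by anticipating exactly this kind of worst-case perturbation rather than only average-case noise.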