Can AI Spot a Phish? LLMs vs. Email Scams

About this audio content

Phishing emails are getting smarter—can AI keep up?

In this episode of Neuro Sec Ops, hosts Alex Carter and Maya Lin dive into a cutting-edge study that tested large language models like BERT, LLaMA, and Wizard against real phishing threats. From email deception to model explainability, we unpack why some models detect scams better than others—and why trust in AI isn’t just about accuracy.

We explore fine-tuning strategies, the ethics of explainable AI, and why even a smart model might still fall for a phish. Whether you're in cybersecurity or just curious about AI’s role in digital defense, this one's for you.

Topics Covered:

  • Phishing detection with LLMs

  • Explainability vs. accuracy

  • SHAP values and CC-SHAP scoring

  • The future of AI in cybersecurity

Subscribe for more deep dives at the intersection of AI and security.

Kuikel, S., Piplai, A., & Aggarwal, P. (2025). Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and