
OpenAI: Disrupting Malicious Uses of AI – June 2025
About this listen
Summary of https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf
The report details OpenAI's efforts to identify and counter abusive activities leveraging its AI models. It presents ten case studies of disrupted operations, including deceptive employment schemes, covert influence operations, cyberattacks, and scams.
The report highlights how threat actors, often originating from China, Russia, Iran, Cambodia, and the Philippines, utilized AI for tasks ranging from generating social media content and deceptive resumes to developing malware and social engineering tactics.
OpenAI emphasizes that the threat actors' reliance on AI has paradoxically increased visibility into their malicious workflows, allowing for quicker disruption and sharing of insights with industry partners.
- OpenAI's mission is to ensure that artificial general intelligence (AGI) benefits all of humanity by deploying AI tools to solve difficult problems and defend against various abuses. This includes preventing AI use by authoritarian regimes, and combating covert influence operations (IO), child exploitation, scams, spam, and malicious cyber activity.
- OpenAI has successfully detected, disrupted, and exposed a range of abusive activities by leveraging AI as a force multiplier for their expert investigative teams. These malicious uses of AI include social engineering, cyber espionage, deceptive employment schemes (like the "IT Workers" case), covert influence operations (such as "Sneer Review," "High Five," "VAGue Focus," "Helgoland Bite," "Uncle Spam," and "STORM-2035"), cyber operations ("ScopeCreep," "Vixen," and "Keyhole Panda"), and scams (like "Wrong Number").
- These malicious operations originated from locations around the world, demonstrating a widespread threatscape. Four of the ten cases in the report likely originated from China, spanning social engineering, covert influence operations, and cyber threats. Other disruptions involved activities from Cambodia (a task scam), the Philippines (comment spamming), and covert influence attempts potentially linked to Russia and Iran. Additionally, deceptive employment schemes showed behaviors consistent with North Korea (DPRK)-linked activity.
- Threat actors utilized AI to evolve and scale their operations, yet this reliance also increased their exposure and aided in their disruption. For example, AI was used for automating resume creation, generating social media content, translating messages for social engineering, and developing malware. Paradoxically, this integration of AI into their workflows provided OpenAI with insights, enabling quicker identification and disruption of these threats.
- AI investigations are an evolving discipline, and ongoing disruptions help refine defenses and contribute to a broader understanding of the AI threatscape. OpenAI emphasizes that each disrupted operation improves their understanding of how threat actors abuse their models, allowing them to refine their defenses and share findings with industry peers and authorities to strengthen collective defenses across the internet.
