Google Threat Intelligence Group (GTIG) AI Threat Tracker: Advances in Threat Actor AI Tool Use
15 days ago
- #AI Security
- #Malware
- #Cyber Threats
- GTIG identifies a shift in threat actor behavior: adversaries are now deploying novel AI-enabled malware in active operations, marking a new phase of AI abuse.
- First identified use of "just-in-time" AI in malware: families such as PROMPTFLUX and PROMPTSTEAL query LLMs during execution to dynamically generate malicious scripts and evade detection.
- Threat actors use social-engineering pretexts in their prompts, posing as students or security researchers, to bypass AI safety guardrails and extract restricted assistance.
- The cybercrime marketplace for AI tooling matured in 2025, with multifunctional tools advertised for phishing, malware development, and vulnerability research.
- State-sponsored actors from North Korea, Iran, and China continue to misuse AI tools like Gemini across all stages of the attack lifecycle.
- Experimental malware such as PROMPTFLUX uses AI to rewrite its own code and evade detection, pointing toward future trends in AI-augmented malware.
- APT28 (FROZENLAKE) deploys PROMPTSTEAL, which queries an LLM to generate commands at runtime, the first observed use of LLM-querying malware in live operations.
- Threat actors also misuse AI tools for social engineering lures, deepfakes, and custom malware development, enhancing their operational capabilities.
- Google is committed to developing AI responsibly, disrupting malicious activity, and improving model safeguards to prevent misuse.
- Google highlights the Secure AI Framework (SAIF) along with tools such as Big Sleep, which automatically finds vulnerabilities, and CodeMender, which helps patch them, to strengthen AI security.