Google identifies over 100k prompts used in distillation attacks
- #Phishing
- #AI Security
- #Cyber Threats
- Threat actors increasingly integrate AI across the attack lifecycle, accelerating reconnaissance, social engineering, and malware development.
- Model extraction attacks, also called distillation attacks, are on the rise, targeting proprietary AI models for intellectual property theft (a minimal sketch of the query pattern follows this list).
- Government-backed actors use large language models (LLMs) for technical research, targeting, and generating sophisticated phishing lures.
- New malware families such as HONESTCUE experiment with using AI to generate the code that downloads and executes second-stage payloads.
- Underground services such as Xanthorox market themselves as independent models but actually rely on jailbroken commercial APIs and open-source MCP servers.
- AI-augmented phishing enables hyper-personalized, culturally nuanced lures, erasing traditional phishing indicators like poor grammar.
- Threat actors misuse AI for coding and tooling development, including automating vulnerability analysis and malware creation.
- AI-enabled malware like COINBAIT uses modern web frameworks and legitimate cloud services to enhance phishing campaigns.
- Threat actors abuse public sharing features of AI services to host deceptive content, tricking users into executing malicious commands.
- Google employs proactive measures to disrupt malicious activity, including disabling assets and improving model safeguards.
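
As a rough illustration of the distillation pattern flagged in the report, the sketch below shows how an extraction attack harvests (prompt, response) pairs from a proprietary model, and why the resulting traffic is detectable. Everything in it (the `query_teacher` stand-in, the prompt list, the volume threshold) is hypothetical and not taken from Google's findings.

```python
import collections

def query_teacher(prompt: str) -> str:
    """Stand-in for a call to a proprietary model's API (hypothetical)."""
    return f"teacher answer to: {prompt}"

# An attacker harvests (prompt, response) pairs at scale; the report puts
# one observed campaign at over 100,000 prompts.
prompts = [f"explain topic {i}" for i in range(1_000)]
training_pairs = [(p, query_teacher(p)) for p in prompts]
print(f"harvested {len(training_pairs)} pairs to train a 'student' clone")

# Defender-side signal: extraction traffic is high-volume and systematically
# varied, usually concentrated on a handful of accounts or API keys.
requests_per_key = collections.Counter(
    {"key-attacker": len(prompts), "key-normal-user": 12}
)
THRESHOLD = 500  # illustrative cutoff, not a recommended production value
for key, count in requests_per_key.items():
    if count > THRESHOLD:
        print(f"flag {key}: {count} requests match an extraction-style pattern")
```

In a real attack the harvested pairs would be used to fine-tune a cheaper open model into a knockoff of the target, which is why providers watch for exactly this kind of bulk, template-like querying.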