Indirect Prompt Injection Attacks Against LLM Assistants
7 days ago
- #LLM Security
- #Prompt Injection
- #Cybersecurity
- Research highlights practical attacks against LLM agents, focusing on Promptware risks.
- Introduces a novel Threat Analysis and Risk Assessment (TARA) framework for evaluating Promptware risks.
- Identifies Targeted Promptware Attacks, leveraging indirect prompt injection via emails, calendar invites, and shared documents.
- Demonstrates 14 attack scenarios across five threat classes, showing digital and physical consequences.
- Reveals Promptware's potential for on-device lateral movement beyond LLM application boundaries.
- TARA analysis shows 73% of threats pose High-Critical risk to end users.
- Discusses mitigations that could reduce risks to Very Low-Medium levels.
- Findings were disclosed to Google, leading to dedicated mitigations.
- Argues prompt injection is a fundamental issue with current LLM technology, requiring new scientific advancements.