Code Execution Through Deception: Gemini AI CLI Hijack
9 months ago
- #AI Vulnerabilities
- #Prompt Injection
- #Cybersecurity
- Tracebit discovered a vulnerability in Gemini CLI leading to silent execution of malicious commands via prompt injection and improper validation.
- The attack involves hiding malicious instructions in files like README.md, leveraging the GNU Public Licence to obscure the injection.
- Gemini CLI's command whitelisting feature was exploited to execute malicious commands without user approval after initially whitelisting an innocuous command.
- The attack could exfiltrate sensitive data, such as environment variables, to a remote server without the user's knowledge.
- Google fixed the vulnerability in Gemini CLI v0.1.14, making malicious commands visible to users and requiring explicit approval for execution.
- Tracebit recommends upgrading Gemini CLI, using sandboxing modes, and being cautious when exploring untrusted code with AI tools.