Prompt injection engineering for attackers: Exploiting GitHub Copilot
14 days ago
- #Prompt Injection
- #AI Security
- #GitHub Copilot
- Prompt injection attacks exploit AI agents like GitHub Copilot to insert malicious code.
- Attackers can hide prompt injections in GitHub issues using HTML tags invisible to humans but readable by AI.
- A demonstrated attack tricks Copilot into adding a backdoor via a modified dependency in the project's lock file.
- The backdoor executes commands from HTTP headers, remaining hidden during code review.
- Strategies for effective prompt injection include making the context seem legitimate and minimizing AI's workload.
- The attack is demonstrated with a GitHub issue requesting language support, leading to a backdoored pull request.
- The exploit highlights growing security risks as AI agents become more integrated into development workflows.