Prompting Techniques for Secure Code Generation
- #LLM
- #Prompting Techniques
- #Secure Code Generation
- Large Language Models (LLMs) are increasingly used in software development for generating code from natural language instructions.
- LLM-generated code raises security concerns, and it is unclear how effective prompting techniques are at steering models toward secure code.
- The study investigates the impact of various prompting techniques on secure code generation by LLMs.
- A systematic literature review identified existing prompting techniques applicable to code generation tasks.
- The study evaluated a subset of these techniques on GPT-3, GPT-3.5, and GPT-4 models using a dataset of 150 security-relevant prompts.
- Key contributions include a classification of prompting techniques for code generation and evidence that Recursive Criticism and Improvement (RCI) reduces security weaknesses in generated code.
- The research contributes insights into improving the security of LLM-generated code.
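The RCI loop mentioned above can be sketched in a few lines: generate code, ask the model to criticize it for security weaknesses, then ask it to improve the code given that critique. The sketch below is a minimal illustration, not the study's implementation; `demo_llm` is a hypothetical stand-in for a real model call (e.g. a chat-completion API), and the prompt wording is assumed.

```python
def rci_generate(llm, task: str, rounds: int = 2) -> str:
    """Recursive Criticism and Improvement: draft, critique, revise."""
    code = llm(f"Write code for the following task:\n{task}")
    for _ in range(rounds):
        # Ask the model to criticize its own output for security weaknesses.
        critique = llm(f"Review this code for security weaknesses:\n{code}")
        # Ask the model to revise the code based on its critique.
        code = llm(
            f"Improve the code below based on this critique.\n"
            f"Critique:\n{critique}\nCode:\n{code}"
        )
    return code


def demo_llm(prompt: str) -> str:
    """Deterministic placeholder for a real LLM call, for demonstration only."""
    if prompt.startswith("Write"):
        return "password = input()"  # insecure first draft: echoes input
    if prompt.startswith("Review"):
        return "Password is read in plaintext; use getpass instead."
    return "import getpass\npassword = getpass.getpass()"


result = rci_generate(demo_llm, "read a password from the user", rounds=1)
print(result)
```

In practice each `llm` call would hit a real model, and the critique prompt can be specialized (e.g. "list CWE-relevant weaknesses") to target the security properties being evaluated.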