Why LLM-Generated Passwords Are Dangerously Insecure
- #password security
- #artificial intelligence
- #cybersecurity
- LLM-generated passwords appear strong but are fundamentally insecure due to predictable token sampling.
- LLMs produce passwords with biased character distributions, repeated patterns, and lower-than-expected entropy.
- Even state-of-the-art models like GPT, Claude, and Gemini generate weak passwords with recurring structures.
- Coding agents sometimes default to LLM-generated passwords instead of using secure random generation methods.
- The reduced entropy means these passwords can be brute-forced far faster than their length and apparent complexity suggest.
- These passwords are found in real-world code and configurations, posing security risks.
- Security recommendations include avoiding LLM-generated passwords and using CSPRNGs or password managers.
- Raising the sampling temperature does not sufficiently improve password randomness.
- Attackers could prioritize LLM-generated passwords in brute-force attempts, exploiting their low entropy.
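The claim about biased character distributions can be illustrated with a simple per-character Shannon entropy estimate. This is a sketch for intuition only, not the measurement methodology behind the article's findings; the sample strings are hypothetical.

```python
import math
from collections import Counter

def shannon_entropy_bits(samples: list[str]) -> float:
    """Estimate per-character Shannon entropy (bits) from a corpus of passwords.

    A uniform distribution over N symbols gives log2(N) bits per character;
    biased or repetitive output (as seen in LLM-generated passwords) gives less.
    """
    counts = Counter(ch for pw in samples for ch in pw)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical examples: a repetitive string carries far less entropy per
# character than one that uses its alphabet uniformly.
print(shannon_entropy_bits(["aaaaaaaa"]))  # 0.0 bits/char
print(shannon_entropy_bits(["abcdefgh"]))  # 3.0 bits/char
```

Note this estimator only captures single-character frequency bias; repeated multi-character patterns (a failure mode the article also describes) reduce effective entropy even further than this number shows.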
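The recommended alternative to LLM generation can be sketched in a few lines. This is a minimal example using Python's standard-library `secrets` module, which draws from the operating system's CSPRNG; the alphabet and length shown are illustrative choices, not a policy prescription.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a password from the OS CSPRNG via the secrets module.

    Each character is drawn independently and uniformly, so a 20-character
    password over this ~94-symbol alphabet carries roughly
    20 * log2(94) ≈ 131 bits of entropy.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
```

Unlike `random.choice`, which uses a seedable Mersenne Twister, `secrets.choice` is explicitly documented for security-sensitive use; a password manager's built-in generator works the same way.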