AI hallucinations lead to a new cyber threat: Slopsquatting
- #AI
- #SupplyChainAttack
- #Cybersecurity
- Researchers warn of 'Slopsquatting', a new supply chain attack exploiting AI-generated fake package recommendations.
- AI models such as GPT-4, CodeLlama, and DeepSeek hallucinate non-existent packages: 19.7% of recommended package names were fake.
- Open-source AI models hallucinate more frequently (21.7%) than commercial ones (5.2%).
- Threat actors can register these hallucinated package names to distribute malicious code, posing widespread risks.
- Hallucinated packages are persistent (43% reappeared across repeated tests) and semantically convincing (38% had names similar to real packages).
- Developers are advised to use dependency scanners and not to skip security testing when adopting AI-suggested packages.
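One practical mitigation along these lines is to check AI-suggested dependency names against a vetted allowlist (e.g. your lockfile) before installing anything. The sketch below is a hypothetical illustration, not from the article; the package names and the `flag_unvetted` helper are assumptions for the example.

```python
import re

def flag_unvetted(requirements: list[str], allowlist: set[str]) -> list[str]:
    """Return requirement names that are absent from the vetted allowlist.

    A hallucinated (slopsquatted) package name would be flagged here
    before it ever reaches "pip install".
    """
    flagged = []
    for line in requirements:
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Take the bare package name, ignoring extras and version specifiers.
        name = re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0].lower()
        if name not in allowlist:
            flagged.append(name)
    return flagged

# Hypothetical example: "flask-gravatar2" stands in for an AI-invented name.
vetted = {"requests", "numpy", "flask"}
deps = ["requests>=2.31", "flask", "flask-gravatar2  # AI-suggested"]
print(flag_unvetted(deps, vetted))  # → ['flask-gravatar2']
```

A dependency scanner does this (and more) automatically; the point of the sketch is that an unrecognized name should trigger manual review, not installation.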