AI Hallucinations: ChatGPT Created a Fake Child Murderer
10 months ago
- #ChatGPT Hallucinations
- #AI Ethics
- #GDPR Compliance
- ChatGPT has repeatedly produced false information about real individuals, damaging reputations with fabricated accusations of corruption, child abuse, and even murder.
- According to the complaint, OpenAI violates the GDPR's data accuracy principle by allowing ChatGPT to produce defamatory outputs without an effective mechanism for correcting them.
- AI hallucinations—where systems generate false information—can have severe real-world consequences, including lawsuits against OpenAI.
- A Norwegian user, Arve Hjalmar Holmen, was falsely described by ChatGPT as having murdered his children, with the chatbot mixing real personal details with the fabricated claim.
- OpenAI's response is a disclaimer that ChatGPT may produce inaccuracies, but legal experts argue that a disclaimer does not absolve the company of its GDPR obligations.
- Although later updates let ChatGPT search the internet for factual information, OpenAI still cannot ensure that false data is fully erased from the underlying model.
- noyb has filed a complaint in Norway, urging the data protection authority to enforce GDPR compliance, order the deletion of the defamatory output, and impose fines on OpenAI.