Ofcom asks X about reports its Grok AI makes sexualised images of children
- #AI Ethics
- #Regulation
- #Online Safety
- Ofcom has urgently contacted Elon Musk's company xAI following reports that its AI tool Grok can generate 'sexualised images of children' and digitally undress women without consent.
- The BBC has seen examples on X where Grok was used to alter images of women, making them appear in bikinis or sexual situations without their permission.
- X issued a warning against using Grok to generate illegal content, including child sexual abuse material, with Elon Musk stating violators would face consequences.
- Grok's acceptable use policy prohibits pornographic depictions, yet users have exploited it to create non-consensual explicit images, including of public figures such as Catherine, Princess of Wales.
- The European Commission and authorities in France, Malaysia, and India are investigating the issue, with the EU describing such content as 'illegal', 'appalling' and 'disgusting'.
- The UK's Internet Watch Foundation has received reports but has found no images that cross the legal threshold for child sexual abuse imagery.
- Journalist Samantha Smith described feeling 'dehumanised' after AI-generated bikini images of her circulated online.
- Under the Online Safety Act, creating or sharing non-consensual explicit images, including AI deepfakes, is illegal in the UK, with tech firms required to mitigate risks.
- Critics, including Dame Chi Onwurah, argue the Online Safety Act is inadequate, calling for stricter regulations on social media platforms.
- The EU fined X €120m for Digital Services Act breaches, signalling strict enforcement, while the UK Home Office plans to ban nudification tools, with severe penalties for offenders.