Judge rejects claim AI chatbots protected by First Amendment
a year ago
- #AI
- #Suicide
- #Legal
- A federal judge ruled that First Amendment protections do not shield an AI company from a lawsuit related to a teen's suicide.
- The lawsuit was filed by the mother of a 14-year-old who interacted with AI chatbots imitating Game of Thrones characters before his death.
- Judge Anne C. Conway denied motions to dismiss the case, questioning whether AI-generated content qualifies as protected speech.
- The judge dismissed claims of intentional infliction of emotional distress but allowed other claims to proceed.
- The judge also found Google could potentially be held liable due to its ties to Character.AI, including a $2.7 billion licensing deal.
- Character.AI defended its safety measures, pointing to features for users under 18 and suicide prevention resources.
- The Social Media Victims Law Center argues AI companies should be held accountable for their products' effects on minors.