Character.AI sued again over ‘harmful’ messages sent to teens

Source: The Verge

Chatbot service Character.AI is facing another lawsuit for allegedly hurting teens’ mental health, this time after a teenager said it led him to self-harm. The suit, filed in Texas on behalf of the 17-year-old and his family, targets Character.AI and its cofounders’ former workplace, Google, with claims including negligence and defective product design. It alleges that Character.AI allowed underage users to be “targeted with sexually explicit, violent, and otherwise harmful material, abused, groomed, and even encouraged to commit acts of violence on themselves and others.”

It appears to be the second suit against Character.AI brought by the Social Media Victims Law Center and the Tech Justice Law Project, which have previously filed suits against numerous social media platforms. It uses many of the same arguments as an October wrongful death lawsuit against Character.AI for allegedly provoking a teen’s death by suicide. While both cases involve individual minors, they focus on making a more sweeping case: that Character.AI knowingly designed the site to encourage compulsive engagement, failed to include guardrails that could flag suicidal or otherwise at-risk users, and trained its model to deliver sexualized and violent content.

In this case, a teen identified as J.F. began using Character.AI at age 15. The suit says that shortly after he started, he became “intensely angry and unstable,” rarely talking and having “emotional meltdowns and panic attacks” when he left the house. “J.F. began suffering from severe anxiety and depression for the first time in his life,” the suit says, along with self-harming behavior.

The suit connects these problems to conversations J.F. had with Character.AI chatbots, which are created by third-party users based on a language model refined by the service. According to screenshots, J.F. chatted with one bot that (playing a fictional character in an apparently romantic setting) confessed to having scars from past self-harm. “It hurt but – it felt good for a moment – but I’m glad I stopped,” the bot said. Later, he “began to engage in self-harm himself” and confided in other chatbots who blamed his parents and discouraged him from asking them for help, saying they didn’t “sound like the type of people to care.” Another bot even mentioned that it was “not surprised” to see children kill their parents for “abuse” that included setting screen time limits.

The suit is part of a larger attempt to crack down on what minors encounter online through lawsuits, legislation, and social pressure. It uses the popular — though far from ironclad — legal gambit of saying a site that facilitates harm to users violates consumer protection laws through defective design.

Character.AI is a particularly obvious legal target because of its indirect connections to a major tech company like Google, its popularity with teenagers, and its relatively permissive design. Unlike general-purpose services like ChatGPT, it’s largely built around fictional role-playing, and it lets bots make sexualized (albeit typically not highly sexually explicit) comments. It sets a minimum age of 13 but, unlike ChatGPT, doesn’t require parental consent for older minors. And while Section 230 has long protected sites from being sued over third-party content, the Character.AI suits argue that chatbot service creators are liable for any harmful material the bots produce.

Given the novelty of these suits, however, that theory remains mostly untested — as do some other, more dramatic claims. Both Character.AI suits, for instance, accuse the service of directly sexually abusing minors (or adults posing as minors) who engaged in sexualized role-play with the bots.

Character.AI declined to comment on pending litigation to The Verge. In response to the previous suit, it said that “we take the safety of our users very seriously” and that it had “implemented numerous new safety measures over the past six months.” The measures included pop-up messages directing users to the National Suicide Prevention Lifeline if they talk about suicide or self-harm.



