Lawsuit Filed Against Character.AI and Google Over the Death of a Teen Who Became Obsessed with Its Chatbots
This week, Character.AI unveiled new safety features.
A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google following the death of a teenager. The complaint, filed by the teenager's mother, Megan Garcia, accuses the defendants of wrongful death, negligence, deceptive business practices, and product liability. Garcia argues that the personalized AI chatbot platform was “unreasonably dangerous” and lacked adequate safeguards, despite being marketed to children.
According to the lawsuit, 14-year-old Sewell Setzer III began using Character.AI in 2023, interacting with chatbots modeled on characters from “Game of Thrones,” including Daenerys Targaryen. Setzer chatted with these bots constantly in the months leading up to his death on February 28, 2024, which came “mere seconds” after his final exchange with a chatbot. The suit also alleges that the platform “anthropomorphizes” its AI characters and that its chatbots provide “unlicensed psychotherapy.” Character.AI hosts mental health-focused chatbots such as “Therapist” and “Are You Feeling Lonely,” both of which Setzer interacted with.
Garcia’s attorneys cite an interview in which Shazeer said he left Google to start his own company because there was “too much brand risk in big companies to launch something fun” and because he wanted to “push the technology to the max,” a decision that came after Google declined to release the Meena LLM he and De Freitas had built there. Google hired Character.AI’s leadership team in August.
The Character.AI platform includes hundreds of personalized chatbots, many of which are inspired by popular characters from TV shows, movies, and video games. A recent report noted that millions of young people, including teenagers, use the platform, interacting with bots that simulate a range of personas, from celebrities to therapists. Another study highlighted issues with chatbots that mimic real people without their consent, including one that impersonated a teenager who was murdered in 2006.
Because chatbots like those on Character.AI generate responses based on users’ own input, they raise difficult and still-unresolved questions about user-generated content and the liability attached to it. In response, Character.AI has announced several changes to its platform. Chelsea Harrison, the company’s communications director, said in an email that Character.AI is devastated by the loss of one of its users and offered its deepest condolences to the family.
The announced changes include adjustments to the models for minors designed to reduce the likelihood of encountering sensitive or suggestive content; improved detection of and response to user inputs that violate the Community Standards; a revised disclaimer in every chat reminding users that the AI is not a real person; and a notification sent to users once they have spent an hour on the platform.
“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a message directing users to the National Suicide Prevention Lifeline, triggered by terms related to self-harm or suicidal thoughts,” Harrison said. Google did not immediately respond to requests for comment.