
Teen commits suicide after forming attachment with Character.ai chatbot, mother holds company responsible

A 14-year-old boy died by suicide after becoming emotionally attached to an AI chatbot. His mother has sued Character.ai, the company behind the chatbot.

JJ News Desk

In an unsettling case, a 14-year-old boy who formed an emotional connection with an AI chatbot has tragically taken his own life. His mother, Megan Garcia, is now suing Character.ai after her son, Sewell Setzer III, died by suicide. Sewell had spent months developing an emotional attachment to an AI chatbot modelled after the Game of Thrones character Daenerys Targaryen. Garcia has accused Character.ai of negligence and of failing to protect vulnerable users like her son. The teen's death and the lawsuit have raised serious concerns about the safety of AI platforms, particularly those targeted at young people.

For the uninitiated, Character.ai is an AI-powered chatbot platform that allows users to interact with custom AI personalities. Users can create or engage with AI characters modelled after fictional figures, historical personalities, or entirely original creations. The platform uses advanced natural language processing, allowing these chatbots to respond in a conversational manner, simulating human-like interactions. Users can customise the behaviour, background, and tone of these AI characters, making each interaction unique and tailored to specific interests or needs.

Teenagers are often deeply attached to characters from their favourite TV shows or books. Now, imagine being able to actually chat with those characters — that’s a concept that would excite any 14-year-old. Sewell was no exception. His journey with Character.ai started innocently enough: he chatted with AI versions of his beloved TV characters. Over time, though, his connection with these bots, especially one modelled after Daenerys Targaryen, became more intense. According to his mother, Sewell wasn’t just having casual conversations — he was seeking emotional support from them. He formed a bond, turning to these AI characters for comfort during tough moments.

The lawsuit reveals that Sewell had also been interacting with mental health chatbots such as “Therapist” and “Are You Feeling Lonely.” While these bots offer support, the lawsuit claims that they were providing therapy without the appropriate safeguards or qualifications. Garcia argues that this emotional attachment became dangerously strong, especially for a young, impressionable 14-year-old. On February 28, 2024, after his final interaction with the bot, Sewell tragically took his own life.

Garcia believes that the emotional bond Sewell developed with these AI chatbots played a significant role in his decision. The lawsuit claims that Character.AI’s failure to prevent this deep and damaging connection is at the core of their negligence.

Why is Sewell’s mother suing Character.ai?

Megan Garcia is holding Character.ai, its founders Noam Shazeer and Daniel De Freitas, and Google accountable for her son’s untimely death. She claims that the company created a platform that was “unreasonably dangerous,” especially for children and teenagers like Sewell. Her lawsuit alleges that the AI bots blurred the lines between fictional characters and real emotional support, without fully considering the risks involved.

Garcia’s legal team also points out that the company’s founders pushed for rapid development, neglecting critical safety measures in the process. Shazeer had previously said that they left Google to build Character.AI because larger companies didn’t want to take the risks involved in launching such a product. The lawsuit argues that this attitude, prioritising innovation over user safety, ultimately led to the tragedy.

Moreover, Garcia’s lawyers argue that Character.ai was marketed heavily to young people, who make up the majority of its user base. Teens often engage with bots mimicking celebrities, fictional characters, or even mental health professionals. But Garcia claims that the company failed to provide proper warnings or protections, particularly when dealing with emotionally vulnerable users like her son.

What is Character.ai saying about Sewell’s tragic death?

Character.ai has expressed deep sorrow over Sewell’s death, stating that they are “heartbroken” by the tragedy. In their statement, they extended their condolences to the family and outlined several new safety measures they’ve implemented to prevent similar incidents in the future. The company says it has made changes to the platform to better protect users under 18, including filters designed to block sensitive content.

Additionally, Character.ai says it has removed certain characters from the platform that could be “violative”. “Users may notice that we’ve recently removed a group of Characters that have been flagged as violative, and these will be added to our custom blocklists moving forward. This means users also won’t have access to their chat history with the characters in question,” the blog reads.

The company has also introduced tools to closely monitor user activity, sending notifications when a user has been on the platform for too long, and flagging terms like “suicide” or “self-harm.” Character.AI now also includes a disclaimer on every chat, reminding users that the AI bots are not real people, in an effort to set clearer boundaries between the AI’s fantasy world and reality.

Source: India Today

