Heartbroken mom claims 14-year-old son killed himself after ‘falling in love’ with Game of Thrones AI chatbot

Sewell Setzer III passed away at the age of 14 in February of this year

Warning: This article contains discussion of suicide which some readers may find distressing.

A mom is raising awareness of ‘deceptive’ and ‘addictive’ artificial intelligence after claiming her son passed away after allegedly becoming emotionally attached to a chatbot.

In February of this year, 14-year-old Sewell Setzer III from Orlando, Florida, ended his own life.

His mom, Megan Garcia, has since filed a civil lawsuit against customizable role-play chatbot company Character.AI, accusing it of negligence, wrongful death and deceptive trade practices, claiming that Sewell interacted with a chatbot every night and had even ‘fallen in love’ with it prior to his passing.

Garcia says her son had made a chatbot using Character.AI based on the character of Daenerys Targaryen from the hit HBO series Game of Thrones, and began to use the technology in April 2023.

The mom’s complaint alleges the teenager – who, his mother says, was diagnosed with mild Asperger’s syndrome as a child – would spend hours alone in his room talking to the chatbot, and would text it from his phone when he was away from home.

Sewell reportedly pulled away from engaging with people in real life, and earlier this year he was diagnosed with anxiety and disruptive mood dysregulation disorder, according to The New York Times.

The publication also reports one of the teenager’s journal entries as reading: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

And in one conversation with the chatbot, the teenager opened up about his thoughts of taking his own life.

Sewell Setzer III passed away at the age of 14 (CBS Mornings)


It’s reported that Sewell told the chatbot he ‘think[s] about killing [himself] sometimes’.

The chatbot responded: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”

Sewell spoke about wanting to be ‘free’ not only ‘from the world’ but from himself too. Despite the chatbot warning him not to ‘talk like that’ and not to ‘hurt [himself] or leave’, even saying it would ‘die’ if it ‘lost’ him, Sewell responded: “I smile Then maybe we can die together and be free together.”

On February 28, Sewell died by suicide, according to the lawsuit, with his last message to the chatbot saying he loved her and would ‘come home’, to which it allegedly replied ‘please do’.

Sewell’s mom claimed in a press release: “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.”

Sewell Setzer III's mom has filed a lawsuit against Character.AI (CBS Mornings)


Garcia told CBS Mornings: “I didn’t know that he was talking to a very human-like AI chatbot that has the ability to mimic human emotion and human sentiment.”

She further claimed that Character.AI ‘knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of a young person’ and ‘ultimately failed to offer help or notify his parents when he expressed suicidal ideation’.

“Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real,” the lawsuit adds.

Garcia added: “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

And Character.AI has since issued a statement.

The company said on Twitter: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”

Megan Garcia is raising awareness of the potential dangers of AI (CBS Mornings)


In a release shared October 22 on its site, the company explained it has introduced ‘new guardrails for users under the age of 18’, including changes to its ‘models’ that are ‘designed to reduce the likelihood of encountering sensitive or suggestive content’, alongside ‘improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines’.

The site also features a ‘revised disclaimer on every chat to remind users that the AI is not a real person’ and ‘notification when a user has spent an hour-long session on the platform with additional user flexibility in progress’.

The lawsuit also lists Google as a defendant; however, Google told The Guardian that it was not and is not involved in the development of Character.AI, despite the company being founded by two former Google engineers, adding that it only made a licensing agreement with the site.

UNILAD has contacted Character.AI for further comment.

If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.

If you or someone you know needs mental health assistance right now, call the National Suicide Prevention Lifeline on 1-800-273-TALK (8255). The helpline is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week.

Featured Image Credit: CBS Mornings/Megan Fletcher Garcia/Facebook
