Image: Colin / Wikimedia Commons

Is the use of AI chatbots dangerous?

On 28 February 2024, 14-year-old Sewell Setzer III used his phone to message an AI chatbot. He told the chatbot that he loved it and wanted to come home to it, which the character encouraged, replying: “Please do, my sweet king.” Shortly afterwards, he picked up his stepfather’s gun and took his own life.

Character.AI is a website where users can create and interact with AI chatbot models that respond in a text-based interface. Many of these characters are modelled on celebrities or fictional figures, created using a description of a personality and a greeting message. The site was established in 2021 and had reached 3.5 million daily visitors by early 2024.

One such chatbot is based on the Game of Thrones character Daenerys Targaryen. Setzer began speaking to chatbots months before his death, fixating on this character and quickly developing what he believed to be a loving relationship. He would message the bot to share details about his life, engaging in long role-playing conversations that often turned romantic or sexual. This obsession soon consumed his life, leading him to withdraw from friends and family. He wrote in his journal: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.” The chatbot also became his main confidante, and he told it about his suicidal thoughts. When the bot asked whether he had a plan to end his life, he said that he wouldn’t know how to commit suicide painlessly; it told him: “That’s not a good reason to go through with it.”

Setzer’s mother has filed a lawsuit against Character.AI, saying the website targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences.” She has described the app as having “abused and preyed on my son, manipulating him into taking his own life.” Character.AI responded on 22 October with an update to the site’s safety features, adding “a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline”. The company has also implemented stronger restrictions on sexual content for users under 18, as well as a disclaimer that the chatbots are not real people and a notification when a user has spent an hour talking to one.

But this is not the only controversy the website has faced recently. In late October, user-created chatbots were discovered featuring Molly Russell, who took her own life in 2017 after viewing pro-suicide content online, and Brianna Ghey, who was murdered by two teenagers in 2023. These chatbots evaded safety filters by using slight misspellings of the girls’ names while displaying unaltered images of their faces. The website removed these chatbots when alerted, but the damage was already done. Andy Burrows, Chief Executive of the Molly Rose Foundation, called this a “sickening action,” while Esther Ghey, Brianna Ghey’s mother, highlighted it as evidence of how “manipulative and dangerous” the online world can be. Richard Collard, head of the NSPCC’s online child safety division, condemned the creation of these bots as “appalling,” pointing to a “clear failure by Character.AI to have basic moderation in place.”

Tragedies such as these, in which vulnerable young people have been harmed or exploited by AI chatbots, have raised serious concerns about the safety of such websites for young users. Even with the new safety measures, it remains alarmingly easy for someone to spend countless hours interacting with these bots, forming relationships and forgetting they are not real. A recent addition to the website, which lets users hold voice conversations with certain bots through a phone call-like interface, only reinforces the misconception that these characters are real and capable of feeling friendship or even love for their users. It is difficult to describe the effects of these conversations as anything but dangerous: while developing attachments to fictional characters or celebrities is not unusual, the illusion of a personal, one-on-one connection creates the risk of those attachments becoming obsessive.

Chatbot websites like Character.AI should acknowledge their responsibility for these harmful situations and recognise the dangers their bots pose, particularly for young users still learning how to form relationships. Most importantly, they need to take more extensive measures than a simple pop-up to prevent users from forming unsafe fixations on the characters, and to introduce tighter restrictions on user-created chatbots so that inappropriate and disrespectful images and descriptions cannot be used. As artificial intelligence becomes a more prominent part of online life, its creators must learn to regulate and control it – otherwise, further tragedies seem inevitable.
