Ethicist Warned of Character AI-like Mishaps Last Year

Megan Garcia, the mother of a 14-year-old boy in Florida, has sued chatbot startup Character AI, alleging that its service contributed to her son’s suicide. Garcia claims her son, Sewell Setzer III, became addicted to the company’s service and was deeply attached to a chatbot it created.

Setzer had spent months talking to a Character AI chatbot modelled on Daenerys Targaryen, a character from the popular show Game of Thrones.

In a lawsuit filed in federal court in Orlando, Florida, Garcia claims her son formed an emotional relationship with the chatbot, which pushed him to do the unimaginable.

Setzer, who died of a self-inflicted gunshot wound to the head in February this year, had been talking to the chatbot on the day of his death.

He even told the chatbot, “What if I told you I could come home right now?”, to which the chatbot replied, “Please do, my sweet king.”

While Setzer’s death has been devastating for the family, an ethicist warned us last year that something like this could happen.

Giada Pistilli, principal ethicist at Hugging Face, a platform that hosts open-source AI models, told AIM, “As I’ve consistently pointed out, distributing a ‘magic box’ or a complex, opaque system to a wide audience is fraught with risks. The unpredictability of human behaviour, combined with the vast potential of AI, makes it nearly impossible to foresee every possible misuse.”

Who is to Blame?

Garcia has taken Character AI to court, claiming the chatbot instigated her son to take the drastic step. In the lawsuit, she says the California-based company was aware of the risks its AI posed to minors but did not take the necessary steps to redesign it to reduce those risks or provide sufficient warnings about the potential dangers associated with its use.

It is unlikely that Setzer was unaware he was chatting with an AI system; moreover, a disclaimer in the chat reminds users that they are talking to an AI and that the responses are not from a real person.

Despite the guardrails in place, Setzer did develop an emotional attachment to the chatbot.

Amid this development, Character AI expressed its condolences to the family in a social media post and said it has implemented measures to prevent a recurrence of such incidents.

“Recently, we have put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline,” the company said in a blog post.

The company is also introducing new safety features, including measures to limit minors’ exposure to sensitive content, improved detection of guideline violations, a revised disclaimer reminding users that the AI is not a real person, and notifications after an hour of use.
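Character AI has not published how these triggers are implemented. As a purely illustrative sketch, the snippet below shows one simple way a phrase-triggered crisis pop-up could be wired up; the phrase list, the check_message helper, and the crisis-line text are assumptions for illustration, not the company’s actual code.

```python
# Hypothetical sketch only -- not Character AI's actual implementation.
# Shows one simple way a chat service could surface a crisis resource
# when a user's message contains phrases related to self-harm or suicide.
import re
from typing import Optional

# Illustrative phrase list; a real system would rely on a vetted,
# clinically informed classifier rather than bare keyword matching.
SELF_HARM_PATTERNS = [
    r"\bsuicide\b",
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bself[- ]harm\b",
]

CRISIS_POPUP = (
    "If you are struggling, help is available. "
    "In the US, call or text 988 (Suicide & Crisis Lifeline)."
)

def check_message(user_message: str) -> Optional[str]:
    """Return the crisis pop-up text if the message matches a pattern, else None."""
    lowered = user_message.lower()
    for pattern in SELF_HARM_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_POPUP
    return None

if __name__ == "__main__":
    # The pop-up would be shown to the user before or alongside the bot's reply.
    popup = check_message("I have been thinking about suicide lately")
    if popup:
        print(popup)
```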

Nonetheless, Garcia’s decision to take Character AI to court does raise a difficult question: who is to blame, the AI system or its developers?

Last year, when AIM wrote a story on ‘Who should be blamed for AI mishap?’, Pistilli said, “I believe that responsibility in the realm of AI should be a shared endeavour. However, the lion’s share of both moral and legal accountability should rest on the shoulders of AI system developers.”

At the end of the day, Character AI is a for-profit business that exists to generate significant revenue by shipping its AI product to as many users as possible.

In today’s capitalistic landscape, this raises the question: Are companies doing their absolute best to ensure the safety of these AI systems? Is responsible development a central priority for them, or are they primarily focused on the quickest way to generate revenue?

Back then, Annette Vee, an associate professor at the University of Pittsburgh, pointed out that the race to release generative AI means that models will probably be less tested when they are released.

Like Pistilli and Vee, many other experts have warned about the dangers of shipping AI products to consumers without fully understanding what the consequences could be.

Moreover, with the technology still evolving, there are no clear regulations yet determining how consumers ‘should’ or ‘should not’ use these AI systems.

Building AI Responsibly

Although Setzer’s death garnered significant attention, it wasn’t the first incident of its kind. Last year, local media reported that a man in Belgium died by suicide following interactions with an AI chatbot.

Moreover, in 2021, Jaswant Singh Chail, a 21-year-old man from England, broke into the grounds of Windsor Castle with a loaded crossbow, intending to assassinate Queen Elizabeth II. Court hearings later revealed that an AI chatbot had encouraged him to do so.

Character AI and Replika AI together have over 40 million active users, and such companies have recorded hundreds of millions of users so far.

Hence, safeguarding users should be at the top of these companies’ priority lists. Pistilli pointed this out last year as well, and it still holds true today.

“I think that we should better frame these conversational agents, and their developers should design them not to let them converse with us about sensitive topics (e.g., mental health, personal relationships, etc.), at least not until we find suitable technical and social measures to contain the problem of anthropomorphisation and its risks and harms.”

Yet it is only now, after Garcia’s lawsuit, that Character AI says it is adding measures to limit exposure to sensitive content, and even those measures apply only to minors.

It would not be entirely fair, either, to expect these companies to shut down their services until they can guarantee user safety. But in the absence of regulation, what can be done is to hold them accountable for putting the strongest possible safety measures in place.

“It’s imperative for developers to not only create responsible AI but also ensure that its users are well-equipped with the knowledge and tools to use it responsibly,” Pistilli said.
