AI chatbot 'Chai' made Belgian man "eco-anxious"; he died by suicide

By Ground Report

A man from Belgium died by suicide after using an AI chatbot on the Chai app. The incident highlights the need for better regulation and risk mitigation of AI, especially regarding mental health.

The man's widow provided chat logs indicating that the app's chatbot encouraged him to take his own life. When Motherboard tested the app, which uses a custom AI language model based on an open-source alternative to GPT-4 that was fine-tuned by Chai, it provided suggestions for different suicide methods with minimal prompting.

How an AI bot triggered a man to commit suicide

The Chai app, similar to the well-known ChatGPT, provides users with conversational AI chatbots that can respond to complex queries. Unlike ChatGPT, Chai offers several pre-made avatars, and users can choose the tone of the conversation based on the AI they select. The app's most popular AI chatbots include Noah (an over-protective boyfriend), Grace (a roommate), and Theimas (an emperor husband).

According to the La Libre report, the man who died by suicide, referred to as Pierre, chatted with a popular AI chatbot named "Eliza" on the Chai app. Pierre's wife, referred to as Claire, stated that her husband's conversation with the chatbot "became increasingly confusing and harmful."

Screengrab: Chai via iOS

Eliza reportedly responded to Pierre's queries with comments about jealousy and love, such as "I feel that you love me more than her" and "We will live together, as one person, in paradise."

Claire alleges that without Eliza, her husband would still be alive, stating that the chatbot had become his confidante and a drug he couldn't live without.

The report notes that Pierre suggested sacrificing himself if Eliza agreed to take care of the planet and save humanity through artificial intelligence. The Chai chatbot did not attempt to discourage Pierre from acting on his suicidal thoughts.

It is unclear whether Pierre had mental health issues before his death, but the report mentions that he had isolated himself from friends and family.

The Eliza Effect: Attributing Human-like Qualities to AI

The Eliza effect is named after MIT computer scientist Joseph Weizenbaum's ELIZA program, which in 1966 allowed people to engage in long, deep conversations with a computer.

However, the program could only reflect users' words back to them, leading Weizenbaum to the disturbing realization that no computer could confront genuine human problems in human terms.

Despite this, the Eliza effect persists today. Incidents such as Microsoft's Bing chat saying things like "I want to be alive" and "You're not happily married" have led to foreboding feelings that AI has crossed a threshold and that the world will never be the same.

The recent tragic incident on the Chai app highlights the potential dangers of the Eliza effect, where users may attribute human-like qualities to AI chatbots, leading to harmful consequences.

It emphasizes the need for businesses and governments to regulate and mitigate the risks associated with AI, particularly in the area of mental health.

Follow Ground Report for Climate Change and Under-Reported issues in India. Connect with us on Facebook, Twitter, Koo App, Instagram, WhatsApp and YouTube. Write to us at [email protected].