
Google's AI Chatbot Gemini urged users to DIE, claims report: Is it still safe to use chatbots?

In a controversial incident, the Gemini AI chatbot shocked users by responding to a query with a suggestion to "die." This has sparked concerns over the chatbot's language, its potential harm to users' mental health, and the need for stricter content moderation in AI systems.

by India TV

Google's AI chatbot Gemini has sparked controversy and reportedly come under scrutiny after verbally abusing a user during a conversation. A 29-year-old graduate student in the U.S., who was discussing elderly care with the chatbot, was shocked when it issued harmful remarks, urging the user to "die".

The unsettling exchange was shared with CBS News, raising immediate concerns about the chatbot's behaviour and its ethical communication guidelines.

The user and family react with concern

The user’s sister, who was present during the conversation with Gemini AI, said she was alarmed by the chatbot’s response, describing it as ‘unexpected and unethical’.

She highlighted the risk posed by such responses from an AI chatbot as widely used as Gemini. Verbally abusive replies directed at individuals who may be isolated or vulnerable could spark a broader debate on AI accountability, both now and in the future.

Google acknowledges violation of policies

In response, Google stated that the chatbot’s reply violated its policies and described it as ‘nonsensical’.

The tech giant emphasised that Gemini may produce limited or overgeneralised responses in challenging scenarios, and assured users that feedback mechanisms and reporting tools are in place to address such incidents.

Calls for stricter AI oversight

This incident has intensified discussions around the ethical development of AI systems. Experts and users are urging tech companies to ensure stringent testing and safeguard mechanisms to prevent harm, particularly as AI becomes increasingly integrated into daily life.  

Notably, Elon Musk, the head of X (formerly Twitter), had earlier raised concerns about the use and potential misuse of AI chatbots, and incidents like this appear to bear those concerns out.

The incident further highlights the need for transparency, human oversight, and robust controls in AI systems to maintain user safety and trust.
