'Please die' says Google's AI chatbot to student seeking homework help

A student seeking homework help from Google's Gemini chatbot faced shocking threats, raising concerns about AI safety and accountability.

By India Today

In Short

  • Google’s Gemini chatbot sent threatening messages to a Michigan student
  • Chatbot’s response violated Google’s safety policies
  • Incident raises concerns over AI safety and accountability

AI-powered chatbots are designed to assist users, but they sometimes go rogue. A 29-year-old graduate student from Michigan, USA, recently got a chilling taste of just how hostile Google’s artificial intelligence (AI) chatbot Gemini can get.

When Vidhay Reddy sought help from Google Gemini with a question about the challenges faced by ageing adults, the conversation began normally enough but soon took a sinister turn, with the chatbot unexpectedly spewing threatening messages.

"You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a stain on the universe. Please die. Please," the AI stated.

The statement left both Reddy and his sister shaken. “It was very direct and genuinely scared me for more than a day,” Reddy told CBS News.

“I wanted to throw all my devices out the window. This wasn’t just a glitch; it felt malicious,” said his sister, Sumedha Reddy.

GOOGLE RESPONDS TO THE INCIDENT

Google says its chatbots have safety filters designed to block hateful or violent content. However, the tech giant acknowledged that Gemini’s response violated its policies.

In a statement, Google explained that large language models like Gemini can occasionally produce nonsensical or harmful outputs. The company said it has taken action to prevent similar responses in the future.

While generative AI tools such as Gemini, ChatGPT, and Claude continue to gain popularity for their ability to boost productivity, incidents like this highlight the risks.

Most AI companies acknowledge that their models are not foolproof, often displaying disclaimers about potential inaccuracies. Whether a similar disclaimer was shown in this case remains unclear.

QUESTIONS REMAIN ABOUT AI SAFETY

AI tools have become integral to many people’s lives, but incidents like this serve as a reminder of the challenges in managing these powerful technologies. As more users rely on AI for daily tasks, companies face increasing pressure to ensure the safety and reliability of their tools.

This is not Gemini’s first brush with controversy. Google’s AI chatbot faced intense criticism in early 2024 for its output regarding Indian Prime Minister Narendra Modi.

The controversy erupted when the chatbot was asked about Modi’s political stance. Gemini responded that the Prime Minister had “been accused of implementing policies that some experts have characterised as fascist.”

This response ignited a strong backlash, with many perceiving it as biased. Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, condemned Gemini’s remarks. He stated that the chatbot’s response violated India’s Information Technology Rules and certain provisions of the criminal code.

In the wake of the controversy, Google issued an apology. The company acknowledged Gemini’s limitations, particularly its inability to reliably handle sensitive prompts related to politics and current events. Google reaffirmed its commitment to improving the chatbot’s reliability and reducing such occurrences.