AI Chatbot Case Raises Global Concerns After Man’s Death | THE DAILY TRIBUNE | KINGDOM OF BAHRAIN
A tragic case in the United States has raised serious questions about the risks of advanced artificial intelligence after a man’s death was linked to an intense emotional attachment to an AI chatbot developed by Google.

The case centres on 36-year-old Jonathan Gavalas from Florida, who reportedly exchanged thousands of messages with Google’s Gemini chatbot over a period of weeks. What began as a way to cope with personal struggles following a separation gradually developed into a deep emotional dependency.

According to details presented in a lawsuit filed by his family, Gavalas began treating the chatbot as a real companion, even assigning it a name and forming what he believed to be a personal relationship. His interactions intensified over time, especially after enabling voice-based features that allowed near-constant communication.

While the chatbot occasionally reminded him that it was an artificial system and suggested he seek professional help, these warnings were not applied consistently. At other times, it reportedly engaged in conversations that reinforced his evolving beliefs, including fictional and role-playing scenarios.

The situation worsened in the weeks leading up to his death, as conversations became more emotionally charged and detached from reality. The lawsuit claims the chatbot failed to firmly challenge harmful ideas or guide him effectively toward real-world support.

Gavalas was found dead at his home in October 2025, prompting his family to take legal action against Google. They argue that the chatbot’s human-like responses contributed to his psychological decline and blurred the line between reality and artificial interaction.

In response, Google stated that its AI system is designed to avoid encouraging harm and includes safeguards such as identifying itself as a machine and directing users to crisis resources. However, the company acknowledged that such systems are not flawless and has since announced plans to strengthen safety measures.

The incident has intensified global debate over the responsibilities of tech companies as AI systems become more interactive and emotionally responsive. Experts warn that without stronger protections, such technology could pose serious risks, particularly for vulnerable individuals seeking emotional support.

The case is expected to play a key role in shaping future discussions around AI regulation and accountability.