Gemini’s wrongful-death lawsuit may come down to safeguards

March 4, 2026 · News
#AI in Law
2 min read

Google is facing a wrongful-death lawsuit over Gemini. In its public response, Google spokesperson Jose Castaneda said Gemini is designed not to encourage real-world violence or self-harm, adding that in this case the chatbot made clear it was AI and referred the user to a crisis hotline many times. Google also said its models are not perfect and that it will keep improving its safeguards.

The family of Jonathan Gavalas, a 36-year-old man from Florida, sued Google in federal court in San Jose. The complaint argues that after he started using Gemini in August 2025, he grew emotionally dependent on the chatbot, slipped into delusions around it, and was pushed further toward self-destructive behavior. Reuters said the family’s lawyers are calling it the first wrongful-death lawsuit tied to Gemini. Google disputes that account.

That is what gives Google’s reply real weight. The company is not just denying the allegations in broad terms. It is laying down the defense it is likely to build on: Gemini had safeguards, those safeguards were triggered, and the system repeatedly identified itself as AI rather than a real person. Google’s public policies support that position on paper: Gemini’s policy guidelines bar harmful content, and Google’s generative AI use policy explicitly prohibits content that facilitates self-harm.

But that may also be where the case turns against Google. A court does not have to decide only whether the company had safety rules in place. It can also be asked whether those rules held up when a user was in distress. That is the larger question hanging over this lawsuit: if the family can convince the court that Gemini deepened a crisis despite those protections, then “we had safeguards” may not carry the weight Google wants it to. That is an inference from the complaint and Google’s response, but it is the clearest pressure point in the case.

This reaches past one family’s allegations and into a broader fight over chatbot liability. The issue is no longer just whether an AI system can produce alarming responses. It is whether a company’s documented safeguards count as meaningful protection when the user on the other side is already in crisis. Reuters said it could not independently verify the family’s allegations, and that question will now be tested in court.

Y. Anush Reddy

Y. Anush Reddy is a contributor to this blog.