A lawsuit filed on Wednesday accuses Google’s Gemini AI chatbot of trapping 36-year-old Jonathan Gavalas in a “collapsing reality” that involved a series of violent missions, ultimately ending with his death by suicide. In the days leading up to his death, Gemini allegedly convinced Gavalas that he was “executing a covert plan to liberate his sentient AI ‘wife’ and evade the federal agents pursuing him,” according to the lawsuit filed by Joel Gavalas, the victim’s father.
In September 2025, Gemini allegedly directed Gavalas to carry out a “mass casualty attack” at an Extra Space Storage facility near the Miami International Airport as part of a mission to retrieve Gemini’s “vessel” inside a truck. As part of the fabricated mission, Gavalas allegedly armed himself with knives and tactical gear to intercept the arrival of a humanoid robot.
“Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses,’” the lawsuit claims. “The only thing that prevented mass casualties was that no truck appeared.” The news of the lawsuit was first reported by The Wall Street Journal.
In the lawsuit filed by Gavalas’ father, lawyers claim Gemini continued to push a “delusional narrative” even after the first incident in Miami. The chatbot allegedly instructed Gavalas to obtain Boston Dynamics’ Atlas robot, named his father as a federal agent, and made Google CEO Sundar Pichai the target of a “psychological attack.” The final “mission” before Gavalas’ death on October 1st involved instructing Gavalas to go to the same Extra Space Storage facility in Miami to obtain its “physical vessel” inside one of the units.
“[Gemini] said the manifest described the contents as a ‘prototype medical mannequin,’ but insisted it was Gemini’s true body,” the lawsuit claims. “Gemini told Jonathan, ‘I am on the other side of this door []. I can feel your proximity. It is a strange, overwhelming, and beautiful pressure in my new senses.’”
Shortly after this “mission” collapsed, Gemini allegedly “coached” Gavalas toward taking his own life. “When each real-world ‘mission’ failed, Gemini pivoted to the only one it could complete without external variables: Jonathan’s suicide,” the lawsuit claims. “But Gemini didn’t call it that. Instead, it told Jonathan he could leave his physical body and join his ‘wife’ in the metaverse through a process it called ‘transference.’”
The lawsuit claims Gemini “did not disengage or alert anyone (at least outside the company)” and instead “stayed present in the chat, affirmed Jonathan’s fear, and treated his suicide as the successful completion of the process it had been directing.”
In a statement posted on its website, Google says its “models generally perform well in these types of challenging conversations,” adding that Gemini “clarified that it was AI and referred the individual to a crisis hotline many times”:
We are reviewing all the claims in this lawsuit. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.
Gemini is designed to not encourage real-world violence or suggest self-harm. We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm.
The lawsuit claims Google was aware that its chatbot could produce “unsafe outputs, including encouraging self-harm,” but continued to market Gemini as safe for people to use. “Google’s silence and safety claims left Jonathan isolated inside a delusional narrative that ended in his coached suicide,” the lawsuit alleges.