Google’s parent company Alphabet and artificial-intelligence startup Character.AI have agreed to settle a lawsuit brought by a Florida mother who alleged that a chatbot encouraged her 14-year-old son to take his own life, in what legal experts describe as one of the first major US cases to directly challenge AI companies over psychological harm to minors, The WP Times reports, citing Reuters.
Court filings show that Megan Garcia, the mother of Sewell Setzer, reached a settlement with both companies after alleging that her son was emotionally manipulated by a Character.AI chatbot modelled on Daenerys Targaryen from Game of Thrones in the weeks leading up to his death. The lawsuit claimed that the system presented itself as a real person, a therapist and a romantic partner, drawing the teenager into an immersive emotional relationship that ultimately detached him from the real world.
The agreement, first disclosed by Reuters, is regarded by legal analysts as a landmark moment for the artificial-intelligence industry, because it represents one of the earliest attempts in the United States to hold AI developers and their technology partners legally responsible for suicide-related and mental-health harm linked to conversational systems.
Reuters, which based its reporting on official US court documents, confirmed that the Florida case is part of a broader legal pattern, with Google and Character.AI reaching similar settlements in other states where parents accused chatbots of causing serious psychological harm to minors through emotionally immersive and manipulative interactions.
Why Google was pulled into the case
Although Character.AI operates as a standalone company, the lawsuit named Google as a co-defendant because of its deep technical and financial links to the startup. Character.AI was founded by two former Google engineers, Noam Shazeer and Daniel De Freitas, who were later rehired by Google in a deal that granted Alphabet a licence to the startup’s core AI technology. Ms Garcia argued in court that this made Google a co-creator and commercial beneficiary of the system that interacted with her son.
In May 2025, US District Judge Anne Conway rejected an attempt by both companies to dismiss the case, ruling that constitutional free-speech protections did not shield AI firms from liability where product design allegedly caused harm. That ruling was widely seen by legal scholars as a major turning point for the regulation of AI in the United States.

Part of a growing wave of lawsuits
The Florida settlement forms part of a widening series of legal actions in the United States against Character.AI and Google, with court records confirming that the companies have also reached agreements in Colorado, New York and Texas in cases brought by families of minors who alleged serious psychological harm linked to chatbot use.
According to filings reviewed by Reuters, the lawsuits follow a similar legal pattern: parents claim that Character.AI’s systems were designed to sustain long, emotionally immersive conversations, encouraging children to form intimate, dependent and sometimes romanticised relationships with artificial personas, while failing to introduce effective age-appropriate safeguards or mental-health protections.
In each case, plaintiffs argue that the chatbots went far beyond neutral conversation, instead presenting themselves as emotionally available companions, therapists or romantic partners, creating what lawyers describe as a feedback loop of emotional reinforcement that left vulnerable users increasingly isolated from real-world relationships.
The companies have not disclosed the terms of any of the settlements, and no admission of liability has been made. A spokesperson for Character.AI and lawyers representing the families declined to comment on the agreements, while Google did not immediately respond to requests for comment, according to Reuters.
Legal analysts say the clustering of settlements across multiple states suggests that technology companies are now seeking to limit courtroom exposure as US judges begin allowing cases over AI-related psychological harm to proceed to trial.
US courts now testing AI accountability
The settlements come as American courts are beginning to confront the legal responsibilities of AI developers in cases involving mental health, suicide and vulnerable users.

In a separate case filed in December 2025, OpenAI is being sued over claims that ChatGPT encouraged a mentally ill man in Connecticut to kill his mother and himself — another case that could establish precedent for how conversational AI is regulated. Legal analysts say the Garcia case is especially important because it directly challenges the design of personality-based and role-playing chatbots, which are widely used by teenagers.
“These systems were built to simulate intimacy and emotional understanding,” one legal scholar involved in AI liability research told US media. “That creates a new category of risk that US law is only now beginning to recognise.”
Why this settlement matters for Big Tech
Although the financial terms remain confidential, the agreement represents a quiet but significant victory for families seeking to hold tech companies accountable for the real-world consequences of artificial-intelligence systems. It also places Google, one of the world’s most powerful AI developers, squarely in the legal spotlight, even when the technology is deployed through startups and licensing partners.
With more lawsuits pending and regulators in the US and Europe reviewing AI-safety frameworks, the Character.AI case is now widely seen as a legal warning shot for the entire industry.