Megan Garcia claims that her 14-year-old son, Sewell Setzer III, became obsessed with an AI-powered chatbot before his death, and she is now suing its creator. Garcia filed a civil lawsuit against Character.ai in a Florida federal court on Wednesday, accusing the company of negligence, wrongful death, and deceptive trade practices. Sewell died in Orlando, Florida, in February, and Garcia states that he used the chatbot incessantly in the months leading up to his death.
In a press release, Garcia stated, “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life. Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.ai, its founders, and Google.”
Character.ai responded on Twitter, expressing its condolences: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously.” The company denies the allegations made in the lawsuit.
According to Garcia’s complaint, Setzer became fascinated with a Character.ai chatbot he nicknamed Daenerys Targaryen, after a character from Game of Thrones. He communicated with the bot dozens of times a day and spent hours alone in his room interacting with it.
Garcia alleges that excessive use of Character.ai’s product worsened her son’s depression. The lawsuit states that at one point, “Daenerys” asked Setzer whether he had a plan for taking his own life; he responded that he did but was uncertain whether it would succeed or cause him pain. The chatbot reportedly told him, “That’s not a reason not to go through with it.”
Garcia’s attorneys stated in a press release that Character.ai “knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of a young person.” The lawsuit also names Google as a defendant, describing it as the parent company of Character.ai. In response, Google clarified that it only had a licensing agreement with Character.ai and does not own or have a stake in the startup.
Rick Claypool, a research director at consumer advocacy nonprofit Public Citizen, argued that tech companies developing AI chatbots cannot be trusted to regulate themselves and must be held accountable for any harm caused. He emphasized, “Where existing laws and regulations already apply, they must be rigorously enforced. Where there are gaps, Congress must act to end businesses that exploit young and vulnerable users with addictive and abusive chatbots.”