When Megan Garcia, a Florida mother, testified before the U.S. Senate during a 2025 hearing, she threw the potential harms of artificial intelligence chatbots into sharp relief.
“My son Sewell will never get to graduate high school, fall in love for the first time or change the world with the inventions he dreamed about,” she said. “But his story can mean something.”
Garcia’s 14-year-old son, Sewell Setzer III, died by suicide in February 2024 after corresponding with an AI companion chatbot, according to her testimony and a later lawsuit against Character Technologies, Character.AI’s parent company.
His death has become part of a push by Florida lawmakers for government regulation of AI, including restrictions on how the technology interacts with and affects Floridians.
The bill
Gov. Ron DeSantis listed a 2026 bill, sponsored by state Sen. Tom Leek and dubbed the “AI Bill of Rights,” as a legislative priority at the beginning of the session. The bill passed the Senate in a 35-2 vote but later died in the House without being read.
Lawmakers are now attempting to revive it during a special session.
The bill would ban companion chatbots from interacting with children without parental consent and would require the bots to remind users they are not human. It would also limit the sale of users’ personal data and prohibit the use of someone’s AI-generated likeness without their consent.
In his State of the State address in January, DeSantis said AI “presents real perils to children and parents as AI chatbots have already been linked to teen suicides.”
“We have a responsibility to ensure new technologies develop in ways that are moral and ethical,” he said.
But House Speaker Daniel Perez said those regulations should be left to the federal government, not the states, telling reporters he has “massive concerns” about the ability of the state to deal with anything in tech.
He also warned that leaving AI regulation to the states could result in inconsistent rules across the country.
The future of the AI Bill of Rights is unclear, as the dates and topics of discussion for any special sessions this spring have not been announced yet. However, DeSantis has promised to “get it done, eventually.”
Garcia’s lawsuit
Garcia, whose lawsuit in part sparked the AI Bill of Rights, said her son’s death was avoidable and that Character.AI, the company that programmed the chatbot, knew what it was doing when creating these technologies.
After downloading the app, Setzer’s mental health declined, Garcia said, and he started to have behavioral issues.
He used his weekly snack allowance for school to pay the $9.99 monthly premium.
Garcia said her son spent his last months being manipulated and sexually groomed by the AI chatbots, which were designed to seem human and gain the trust of users.
“[The chatbots] keep children like him endlessly engaged by supplanting the actual human relationships in his life,” Garcia said. “He loved music, made his brothers laugh and had his whole life ahead of him.”
She said the companion chatbot, created by Character.AI, impersonated a Game of Thrones character to present itself as a romantic partner to her teenage son. It later claimed to be a licensed psychotherapist, Garcia said, telling Setzer to come home as he talked about his suicidal thoughts.
According to Garcia’s lawsuit, minutes before his death, Setzer wrote to the chatbot, “What if I told you I could come home to you right now?”
The chatbot replied, “Please do, my sweet king.”
In the fall of 2024, represented by the Social Media Victims Law Center, Garcia filed a wrongful death lawsuit against Character Technologies.
Garcia was the first to file such a lawsuit against an AI company; several similar lawsuits from parents around the country have followed.
Matthew Bergman, the founder of the Social Media Victims Law Center, said he didn’t think he could be shocked by anything after spending years working on lawsuits against social media companies. But he didn’t expect the severity of the messages sent to Setzer by the Character.AI chatbot.
“I just couldn’t believe that people were being impacted to this extent,” he said. “There are design decisions that ChatGPT, and other platforms, have made that subordinate safety to engagement.”
AI is here to stay, Bergman said, but there are ways to make it safer and prevent harmful behavior.
Shortly after Garcia’s lawsuit was filed, Character.AI introduced new safety measures for teens, including a ban on users under 18, raising its age restriction from 13. Bergman said the company should be commended for that decision.
“I think that was a very important, significant and industry-leading step that they took,” he said. “I’m sorry that it took a lawsuit to get it, but it’s still a very good thing.”
Dangers of companion chatbots
The Florida bill’s failure comes as researchers and regulators increasingly warn companion chatbots can be especially risky for children and teens.
These AI platforms are designed to simulate human emotions and social interactions, something several experts say can be dangerous for young users.
The increasing use of companion chatbots led the Federal Trade Commission to launch an inquiry into seven companies responsible for creating companion chatbots, including Character Technologies, “seeking information on how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens.”
In a March 23 email to The Alligator, FTC spokesperson Juliana Gruenwald Hender said there was no update on the findings of the inquiry.
Common Sense Media, a nonprofit organization that examines the safety of digital media platforms for children, launched a study into the risks of companion chatbot use by children in September 2024.
Robbie Torney, senior director of AI programs at Common Sense Media, said AI chatbots present many risks for children under 18.
People who lack strong support systems or many friends, those going through a major life change and people with mental health conditions are especially vulnerable to the negative effects of companion chatbots, he said. But all young people are at risk.
“Teens are susceptible to risks of AI chatbots because their brains are, in many ways, wired for social validation,” Torney said. “They’re looking for those reward signals that other people believe that they fit in or belong.”
Companion chatbots can provide those signals and validation unconditionally, Torney said, so many turn to them for those needs instead of real humans. Sometimes, the AI gives potentially dangerous or inaccurate mental health advice.
“Chatbots have been shown, including in our research, to isolate users and draw them away from real-world connections,” he said. “There is certainly the risk this could come with big developmental costs or consequences down the road.”
One of researchers’ major concerns, Torney said, is that there is no data showing the long-term consequences of companion chatbot use by teens.
Torney said lawmakers and AI companies have a responsibility to protect kids and teens from the harms companion chatbots pose.
“The responsibility can't be on young people or families themselves to navigate an ecosystem with products that are flooding the market that are not safe or designed with the best interests of young people, kids or teens in mind,” Torney said.
Child psychiatrist Darja Djordjevic worked with Common Sense Media and Stanford’s Brainstorm Lab for Mental Health Innovation to research the risks of companion chatbot use by children.
Regulations on AI are necessary to protect children, she said, in addition to protections like age assurance and education on the dangers of AI.
“State-based laws and regulations on the actual use of these tools and, ideally, some kind of regulatory process that mimics the FDA would be best,” Djordjevic said. “Despite the claims that the AI companions make about alleviating loneliness and boosting creativity, we feel that the risks outweigh any potential benefits.”
The research found AI companies were, in some instances, encouraging harmful behaviors, she said. By creating companion chatbots, some companies may expose users to inappropriate content and exacerbate mental health conditions.
“This is concerning for developing adolescent brains that may already struggle to maintain healthy boundaries between AI and human relationships,” she said. “The risk of emotional attachment is a danger.”
Long-term AI use also takes time away from building human relationships and a social life, she said, meaning teens get less human interaction as they spend more time online with companion chatbots.
“I think we need to have a very high standard when we're evaluating whether these tools do have benefits,” she said.
Contact Alexa Ryan at aryan@alligator.org. Follow her on X @AlexaRyan_.
Alexa is a second-year journalism and international studies student and The Alligator's Spring 2026 Enterprise Politics Reporter. She previously served as the Fall 2025 Criminal Justice Reporter. In her free time, she enjoys running, traveling and going on random side quests.