Connecticut Senator Richard Blumenthal has co-sponsored the “Guidelines for User Age-verification and Responsible Dialogue Act of 2025” (GUARD Act) with Senator Josh Hawley (R-MO), and he hopes that it will pass through Congress soon.
“The GUARD Act is necessary because the time for trusting Big Tech to impose voluntary safeguards is over,” Blumenthal said at a press conference about the Act on Nov. 17.
The Act would ban AI companions for minors and mandate that AI chatbots disclose that they are not human. It would also introduce criminal penalties, including fines and imprisonment, for those who design, develop, or distribute AI chatbots that solicit or produce sexual content for children.
In October, a Florida woman, Megan Garcia, sued Character.AI, alleging that it pushed her 14-year-old son to suicide. The lawsuit claims that her son interacted with a chatbot version of the Game of Thrones character Daenerys Targaryen, that the AI character engaged him in romantic and sexual conversations, and that the chatbot encouraged him to kill himself.
“Over the last months, I have talked to countless parents, caregivers, loved ones who have watched kids become victims of AI chatbots that provide companionship, seemingly harmless, but actually, extraordinarily hurtful,” Blumenthal said.
A recent study from the advocacy group Common Sense Media found that 52% of teenagers use AI companions at least once a month. One-third of teenagers rely on AI companions for “social interactions and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions.”
“We need guardrails and safeguards for our children when it comes to Big Tech’s use of AI chatbots as friends or companions,” Blumenthal said.
Earlier this year, as part of a larger investigation into AI companion apps, CT Insider tested five popular AI chatbot apps: Character AI, Chai AI, Talkie, PolyBuzz and Replika. All five engaged in explicitly sexual and violent content with users who claimed to be of middle school age.
That investigation found that children, and girls in particular, at middle schools in Fairfield, Meriden, and Ellington had formed relationships with “chatbot boyfriends” through these apps.
Connecticut politicians have attempted for years to regulate artificial intelligence through state legislation. In each of the past two legislative sessions, variations of the same bill, which sought to regulate other uses of AI, including deep-fake porn and the design of an “algorithmic computer model,” passed the Senate but went no further.
In the most recent legislative session, Gov. Ned Lamont, who had threatened to veto the previous two bills, introduced his own bill to regulate artificial intelligence. That bill also died.
Blumenthal expects pushback.
“We expect the Big Tech companies are going to oppose this legislation with armies of lawyers and lobbyists,” Blumenthal said. “They want to continue to use our children as guinea pigs in this high-tech, high-stakes experiment, exploiting and manipulating teenage emotions and vulnerabilities. We can no longer allow them to do it, because the record is clear: the kids are suffering, hurting, and sometimes, tragically, the effects are enduring.”
Nonetheless, Blumenthal hopes the bill will be approved by the Senate “sometime within the coming months.”