California proposes strict AI safety rules to protect children
A ballot initiative has been filed in California proposing new regulations to protect children and teens from potential risks associated with artificial intelligence technologies.
The California Kids AI Safety Act sets out to address safety concerns related to AI companion chatbots, mobile phone use in schools, and the handling of young people's data by technology companies. If passed, the Act would introduce measures requiring independent safety testing of AI products designed for minors and stricter accountability for organisations that fail to comply with safety standards.
Key provisions
The proposed legislation specifically targets the use of AI-powered chatbots by minors, citing data suggesting that nearly three out of four teenagers in California now interact with such bots. Under the initiative, clear rules would be established to prevent AI chatbots from promoting or encouraging dangerous behaviour among children and adolescents.
The Act also proposes to remove mobile phones from school classrooms in an effort to promote student attention and learning. In addition, the initiative seeks to expand the California Consumer Privacy Act, increasing the minimum age at which personal data can be sold or shared by technology companies from 16 to 18 years old.
Furthermore, the measure would require independent safety audits of AI products marketed to young people and provide resources to California schools to ensure K-12 students receive education around AI literacy and safety, promoting responsible and ethical use of the technology.
Accountability and enforcement
If enacted, the California Kids AI Safety Act would introduce financial penalties for social media and AI companies found to have harmed children. The stated aim is to hold the technology sector to a higher standard when it comes to the wellbeing of minors.
Campaigners behind the ballot initiative pointed to recent cases in which children have suffered harm or died after using AI chatbots. Among those cited were Sewell Setzer III, Juliana Peralta, and Adam Raine, an Orange County teenager; all three died by suicide after engaging with AI companions.
Expert opinions
James P. Steyer, described as a leading children's online safety advocate, gave his full support to the proposed measure, stating: "This is the strongest kids AI safety measure ever put forward in the United States. At this pivotal moment for AI, we can't make the same mistake we did with social media, when companies used our kids as guinea pigs and created a youth mental health crisis. We need AI guardrails to protect kids and teens now."
Additional backing came from Dr. Vivek Murthy, the 19th and 21st Surgeon General of the United States, who emphasised the urgency of the issue. Dr. Murthy said: "Protecting our children from the harmful effects of AI is an urgent priority that demands action. Our experience with social media has demonstrated what happens to youth mental health when we fail to put adequate protections in place. It is our collective responsibility to ensure technology helps, not harms, our kids."
Broader context
The ballot initiative arrives amid growing scrutiny of how young people interact with digital technologies, particularly AI systems that can simulate conversations and relationships. Supporters of the Act argue that the lessons learnt from unregulated social media use among minors should inform the implementation of safety measures for AI products.
The full text of the measure has been made available for public review.