Wed. Nov 12th, 2025

From today’s Electronic Privacy Information Center:

Nearly three in four teens report using AI chatbots, and about one in three teen users of AI chatbots report feeling uncomfortable with something an AI chatbot has included in an output, a Common Sense Media survey recently reported. To the children and teens using them, these technologies are not an experiment, even if companies still treat them like one. As chatbots are deployed to hundreds of millions of kids and teens, there have been widespread documented harms, including mental health issues, financial harm, medical harm, emotional dependence, manipulation and deception, psychosis, delusional thinking, self-harm and suicide, bias reinforcement, and anger or impulsive actions. Enforcers and policymakers now face a familiar challenge: applying longstanding laws to emerging technologies.

Recognizing the urgency of these issues, we’ve come together – a team of privacy experts and former enforcers – to outline how existing laws can meet emerging AI risks to kids and teens. Our reference guide, “How Existing Laws Apply to AI Chatbots for Kids and Teens,” offers a practical overview of how existing legal frameworks can address emerging risks associated with chatbots used by or directed toward minors.

It underscores a key point: There is no AI exemption in the law. Federal and state consumer protection, data privacy, and data security statutes continue to apply, even as “new” technologies reshape how harms manifest in our lives.

Read the complete story here.

By Editor