Meta Platforms Introduces New Parental Controls for Teen AI Use

Oct 17, 2025 15:17:00 -0400 by Angela Palumbo | #AI

Meta Platforms says parental protections for teen AI use are coming. (Dreamstime)

Key Points

Meta Platforms announced new ways for parents to monitor and intervene in their teens’ interactions with artificial intelligence chatbots, as concerns about children’s safety when it comes to AI use intensify.

Meta said in a blog post Friday that it’s building new controls that will let parents see and manage how their teenage children interact with AI characters on its apps. The changes will begin on Instagram and start rolling out early next year. Parents will be able to turn off their teens’ access to one-on-one chats with chatbots and block access to specific AI characters. Facebook and Instagram require users to be ages 13 and up.

“We hope today’s updates bring parents some peace of mind that their teens can make the most of all the benefits AI offers, with the right guardrails and oversight in place,” Meta said in the announcement.

Therapist Loraine Moorehead of Lorain Moorehead Therapy and Consultation told Barron’s that while this update isn’t a perfect fix to the problem of children’s interactions with AI chatbots, it’s a start.

“It’s the bare minimum, but it’s also one important step,” Moorehead said. “Besides age restrictions and parents being able to set those limitations, I’m not sure what else they can do, but I think it is important that they do what they can.”

The American Psychological Association published a health advisory in June that called for safeguards on AI chatbots, warning that AI conversations risk displacing or interfering with the development of teens’ healthy, real-world relationships.

“Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections,” the health advisory said.

There have already been reports of harmful interactions between teens and AI chatbots. Reuters reported in August that Meta allowed its chatbots to have romantic conversations with minors; Meta changed its AI chatbot policies after the report. On Sept. 10, the Federal Trade Commission launched an inquiry into tech companies including Meta, Snap, Alphabet, and OpenAI to learn more about the impact AI chatbots have on children.

“AI interactions add an emotional layer that feels personal, but isn’t really human,” Karishma Patel Buford, psychologist and chief people officer of digital mental health company Spring Health, said in a statement to Barron’s. “Teens heavily rely on AI for emotional support or validation. This can blur boundaries and reinforce isolation, instead of fostering real-world connections.”

Studies show that AI use is common among young people. According to a July report from the research organization Common Sense Media, 72% of teens have used AI companions at least once, and 52% qualify as regular users who interact with these platforms at least a few times a month.

Other platforms beyond Meta have their own chatbots, such as Snapchat’s My AI and xAI’s Grok. Because these bots are relatively new, it isn’t surprising that safeguards for children are still being implemented. For now, parents are the first line of defense when it comes to teen interactions with chatbots, Moorehead said.

“Parents need to stay ahead of the curve, just read and become informed however they can about the different AI tools and electronics that kids might be using,” she said. “I think that has long been a great standard for parents. Just to know these are the apps that the kids might be using, these are some of the safeguards that we see as important, these are some of the things to be aware of, and to try and familiarize themselves, because everybody starts somewhere.”

Write to Angela Palumbo at angela.palumbo@dowjones.com