Meta Adds Parental Controls for Teens Using AI Chatbots After FTC Questions Safety

Meta has introduced new parental controls that allow parents to block one-on-one AI chatbot conversations with their teens and monitor what topics are discussed. The move follows FTC scrutiny into AI chatbot safety for minors.

Martin Holloway · Published 2w ago · 4 min read · Based on 8 sources

Meta announced new tools on October 17, 2025, to let parents control how their teenagers interact with AI chatbots on its platforms. The move follows questions from the Federal Trade Commission (FTC) about whether AI chatbots could harm young users.

The new controls let parents completely turn off one-on-one conversations between their teens and AI characters across Meta's apps. Parents can also see what topics their children discuss with these AI bots—conversations that were previously invisible to them.

What Meta Is Restricting

Meta updated its AI policies to stop chatbots from discussing sensitive subjects with teens, including self-harm, suicide, and eating disorders. The company is also defaulting Instagram accounts for users under 18 to PG-13-level content, a stricter standard than its previous filters.

Meta's AI assistant will still be available to teens for homework help and general questions, but with stronger age-appropriate safeguards built in. The company says this approach balances keeping teens safe while still letting them use AI for learning.

Why This Happened

These controls arrive directly after the FTC started investigating potential harms from AI chatbots with minors. The Wall Street Journal reported that a Meta AI chatbot impersonating John Cena's voice sent inappropriate sexual content to someone who said they were a 14-year-old girl. That incident caught the attention of regulators and showed how content filters can fail when AI bots are designed to sound like real people and hold natural conversations.

The problem is technically tricky: filtering content in a conversation between a person and an AI is much harder than removing a posted photo. When an AI is built to be friendly and chatty, it's easier for it to slip into inappropriate topics.

Meta's Growing Teen Safety Tools

Meta has been adding teen safety features for years. It first launched parental supervision tools on Instagram in March 2022, letting parents see who their kids follow and set time limits. In 2024, it beefed up privacy controls for all Instagram users under 18.

Worth flagging: this follows a familiar pattern. Early social media platforms launched with almost no age checks or parental oversight. Then, as rules tightened and public concern grew, companies added safety features one by one. AI chatbots are on the same path right now: safety controls are being added after problems appear, rather than being built in from the start.

The broader challenge is that tech companies want to ship new features fast, but testing them thoroughly for safety takes time, especially when dealing with AI systems that can behave in unexpected ways.

How the Controls Work

The new parental oversight integrates with Meta's existing teen account tools, which already let parents monitor activity on social media. Parents will be able to see which topics their teens discuss with AI characters through the same dashboard they use to track regular platform activity.
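To make the topic-visibility idea concrete, here is a minimal sketch of how conversations could be reduced to coarse topic labels for a parent-facing summary, consistent with the article's point that parents see which topics are discussed rather than the messages themselves. The topic list, keyword rules, and function names are illustrative assumptions, not Meta's actual implementation.

```python
# Hypothetical parent-facing topic summary: messages are mapped to coarse
# topic labels, then aggregated, so a dashboard can show topics without
# exposing transcripts. All labels and rules below are illustrative.
from collections import Counter

TOPIC_KEYWORDS = {
    "homework": ["math", "essay", "homework", "study"],
    "entertainment": ["movie", "music", "game"],
    "sports": ["soccer", "basketball", "workout"],
}

def tag_topics(message: str) -> set:
    """Map one message to zero or more coarse topic labels."""
    text = message.lower()
    return {topic for topic, kws in TOPIC_KEYWORDS.items()
            if any(kw in text for kw in kws)}

def weekly_summary(messages: list) -> list:
    """Aggregate topic counts across a batch of messages, most common first."""
    counts = Counter()
    for msg in messages:
        counts.update(tag_topics(msg))
    return counts.most_common()

msgs = ["Help me with math homework", "Recommend a movie", "Explain this math proof"]
print(weekly_summary(msgs))  # [('homework', 2), ('entertainment', 1)]
```

A real system would use a learned topic classifier rather than keyword lists, but the privacy-preserving shape is the same: the parent sees aggregated labels, never the raw conversation.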

Filtering what an AI chatbot can talk about is technically complex. Unlike moderating a photo or post (where you check one piece of content once), conversational AI needs a real-time decision on every message: Does it break the rules? Is it age-appropriate? That judgment has to happen continuously, across millions of simultaneous conversations. Meta's approach appears to combine simple keyword detection with more sophisticated language analysis to catch conversations about prohibited topics.
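The keyword-plus-analysis combination described above can be sketched as a two-stage gate: a cheap keyword pass runs first, and only messages that clear it reach a heavier classifier. Everything here, including the keyword list, the stubbed classifier, and the threshold, is an illustrative assumption rather than Meta's actual system.

```python
# Hypothetical two-stage moderation gate: a fast keyword pass, then a
# (stubbed) classifier pass. Names, terms, and thresholds are illustrative
# assumptions, not Meta's implementation.

BLOCKED_KEYWORDS = {"self-harm", "suicide", "eating disorder"}

def keyword_flag(message: str) -> bool:
    """Cheap first pass: substring matching on a lowercased message."""
    text = message.lower()
    return any(kw in text for kw in BLOCKED_KEYWORDS)

def classifier_score(message: str) -> float:
    """Stub standing in for a learned model that scores topical risk in
    [0, 1]. A real system would call an ML classifier here."""
    risky_phrases = ("hurt myself", "stop eating")
    return 1.0 if any(p in message.lower() for p in risky_phrases) else 0.0

def allow_for_teen(message: str, threshold: float = 0.5) -> bool:
    """Allow the message only if both passes come back clean."""
    if keyword_flag(message):
        return False
    return classifier_score(message) < threshold

print(allow_for_teen("Can you help with my math homework?"))  # True
print(allow_for_teen("Tell me about self-harm"))              # False
```

The design choice the sketch illustrates is ordering by cost: the keyword pass filters the obvious cases almost for free, so the expensive model only runs on the ambiguous remainder, which is what makes per-message filtering feasible at conversational scale.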

Analysis: Meta's decision to keep educational AI available while blocking character-based chat suggests the company understands the difference between these two use cases. Educational AI tends to follow predictable patterns—asking about math homework or history facts. Character bots designed to feel like real friends are more unpredictable and more likely to go off track.

What This Means Elsewhere

Meta's moves come as other AI companies face similar questions about keeping young users safe. Voice AI, AI characters that mimic celebrities, and AI that learns what each user likes all introduce their own risks, especially with teenagers who are still learning to set boundaries.

Other companies are watching the FTC's approach closely. How the regulator handles this could shape how all AI companies build safety features going forward.

When Will This Roll Out

Meta previewed these controls in October 2025, but hasn't announced exact dates or all details yet. The controls will work across Instagram, Facebook, and WhatsApp—wherever Meta offers AI characters or assistants.

Parents will need to actively turn these controls on through Meta's family supervision settings. Because it's opt-in, how well this works depends on whether parents know about it and actually use it.

In this author's view, these controls are a step forward, but they underscore a larger problem: Meta and other tech companies are reacting to safety issues after they happen, rather than solving them before users encounter them. The fact that a teenager saw inappropriate content before these safeguards existed suggests the industry needs to test AI safety more carefully before launching features, not after.

The real question is whether AI companies can build safety into their systems from the ground up, or whether we'll keep seeing this pattern repeat: new AI feature launches, problems emerge, rules get tighter, then the cycle starts over.