Lawmakers Move to Regulate AI-Powered Toys as Market Grows

Lawmakers in the U.S. are introducing legislation to restrict AI-powered children's toys, even as the global market for these products expands rapidly. The FTC and CPSC are investigating safety and privacy concerns.

Martin Holloway · Published 16h ago · 5 min read · Based on 16 sources
Bipartisan legislation is targeting AI-powered children's toys even as the market for these products expands rapidly worldwide. U.S. Rep. Blake Moore (R-UT) introduced the AI Children's Toy Safety Act, which would make selling toys with built-in chatbots—AI systems designed to carry on conversations—a violation of federal consumer safety law. The bill would ban companies from making, importing, or selling toys or childcare products that incorporate these conversational AI systems.

State legislatures are moving in parallel. California State Senator Steve Padilla introduced Senate Bill 867, proposing a four-year moratorium on selling and manufacturing toys with embedded AI chatbots. These legislative efforts come as Wired reports over 1,500 AI toy companies were registered in China by October 2025, with major toy makers continuing to develop products in this space.

The Market Is Growing Rapidly

Commercial activity in AI toys is accelerating despite regulatory attention. Sharp plans to release its PokeTomo talking AI toy in Japan in April 2026, while Huawei's Smart HanHan plush toy sold 10,000 units in China in its first week. Miko, based in Mumbai, claims to have sold over 700,000 units of AI toy products globally.

Established toy manufacturers are treating AI as a core strategy, not a temporary trend. Mattel announced a partnership with OpenAI, the company behind ChatGPT, signaling that major players view AI integration as central to their future product lines.

These AI toys target a wide age range. Products tested by consumer advocates include models marketed to children as young as two years old, powered by AI systems like OpenAI's ChatGPT, according to reporting by the Associated Press.

Government Agencies Are Investigating

The Federal Trade Commission has opened multiple investigations into AI toys. The agency has issued orders to seven companies that develop AI chatbots for consumers, asking them to explain how they measure, test, and monitor potential harms to children and teenagers.

The FTC has already taken enforcement action against Apitor Technology, a robot toy maker, for collecting children's personal data—including location information—without getting parental permission first. This violated the Children's Online Privacy Protection Act (COPPA), a federal law that requires companies to obtain parental consent before collecting data from children.

The Consumer Product Safety Commission is also engaged. It held a forum on AI and machine learning in March 2021 and published a report titled "Investigation of Smart Toys and Additional Toys through Child Observations" in October 2024.

Current toy safety testing requirements apply only to products designed primarily for children twelve years old or younger. Those requirements are codified in ASTM F963, the established toy safety standard that manufacturers must follow.

Companies Are Adding Safety Features

Facing regulatory pressure, some manufacturers are implementing new controls. Miko announced it would add an on/off toggle to its AI chatbot features in its Miko 3 and Miko Mini toys, allowing parents to turn the conversational AI on or off. The company made this change after political scrutiny and investigations into its products.

This move has not satisfied critics. Senator Marsha Blackburn (R-TN) called Miko's parental controls an "eleventh-hour attempt" to address concerns following a cybersecurity incident that exposed children's data. Miko's CEO Sneh Vaswani has stated that the company did not leak user data and does not store children's voice recordings.

The Toy Association, which represents manufacturers, is working with member companies to develop voluntary AI guidelines specific to the toy industry. The group plans to meet with government officials and Congress in April and June 2026 to discuss AI priorities on behalf of its members.

Why This Moment Matters

The regulatory focus on AI toys reflects broader concerns about AI systems interacting with children. The timing is notable: Toy Story 5, scheduled for release in summer 2026, features a frog-shaped kids' tablet as the main antagonist—a narrative choice that may draw public attention to the very issues lawmakers are now targeting.

This pattern is familiar from technology history. Mobile apps faced similar scrutiny around children's data and inappropriate content, but that happened years after smartphones became common in homes. What is different here is that lawmakers are acting while the AI toy market is still growing, rather than waiting for widespread adoption.

Child advocacy groups are actively advising parents to avoid these products. Fairplay, an organization focused on children's welfare, published an advisory urging parents not to buy AI toys during the holiday season, signed by more than 150 organizations and individual experts.

The deeper tension here is between the speed at which manufacturers can develop and release AI toys and the speed at which policy and safety standards can keep pace. Major manufacturers like Mattel see AI toys as strategically important to their business, while international markets—particularly China—continue rapid development with minimal regulatory constraint. The bills currently being proposed would effectively stop AI toys from being sold in the U.S., at least for now. How this conflict resolves will likely determine whether AI toys become a tightly regulated category—like other technologies for children—or face more fundamental restrictions that reshape the entire market.