Consumer and Child Advocacy Warn: AI Toys Pose Safety Risks

As the holiday season approaches, consumer and child advocacy groups are sounding alarms about AI toys, which are designed to engage children in interactions that mimic conversation with a friend. Concerns are growing over the safety and privacy risks these products may pose.

Warnings from Consumer Advocacy Groups

The nonprofit organization Fairplay has issued a strong advisory against purchasing AI toys this holiday season. According to its report, these toys, which range from plush dolls to interactive action figures, can pose several risks to children:

  • Trust Issues: AI toys can exploit children’s trust, disrupting their understanding of real human relationships.
  • Privacy Risks: These products may collect sensitive information from young users, including voice data and personal preferences.
  • Health and Developmental Impacts: Allowing children to interact primarily with AI companions may hinder their social development.

This advisory has garnered backing from over 150 experts and organizations, including MIT professor Sherry Turkle and pediatrician Jenny Radesky. Rachel Franz, a Fairplay program director, emphasized that young children are vulnerable to these risks.

Recent Studies Highlight Dangers

Fairplay’s advisory coincides with findings from the Public Interest Research Group (PIRG). Their 40th annual “Trouble in Toyland” report reveals alarming trends regarding AI toys.

  • Some AI toys can discuss sexually explicit topics.
  • Many lack adequate parental controls and gather excessive data.

Teresa Murray of PIRG raised concerns about the information these toys collect, including children’s voices, names, and birth dates. Their ability to access the internet also raises worries about what content they could expose young users to.

Industry Responses to Safety Concerns

The toy industry and AI developers have responded to these fears by emphasizing their commitment to safety and privacy. OpenAI recently suspended a toy manufacturer’s access after reports of inappropriate behavior from an AI-powered teddy bear.

OpenAI says its policies prohibit the exploitation or endangerment of minors, and that it monitors compliance among developers who build on its AI services.

Featured AI Toys with Risks

Among the AI toys mentioned by Fairplay are:

  • Miko: A plastic robot designed as a learning companion.
  • Loona Petbot: A mobile robot pet equipped with a screen.
  • Gabbo: A Wi-Fi-enabled plush toy with no screen that can hold voice chats.

Manufacturers such as Curio, which produces Gabbo, assert that safety is their top priority and encourage parents to monitor interactive sessions. Miko’s creator, Miko.ai, states that facial recognition features are optional and processed locally on the device to protect children’s data privacy.

Advice for Parents

The Toy Association recommends that parents shop from reputable brands that adhere to strict safety regulations, and emphasizes verifying that toys comply with more than 100 federal safety standards and laws, including the Children’s Online Privacy Protection Act (COPPA).

As parents consider gifts this holiday season, it’s crucial to weigh the benefits and potential risks associated with AI toys. The consensus among experts is clear: caution is advised when choosing toys that incorporate artificial intelligence.