US regulator probes AI chatbots over child safety concerns

BSS
Published On: 12 Sep 2025, 10:05

NEW YORK, Sept 12, 2025 (BSS/AFP) - The US Federal Trade Commission announced Thursday it has launched an inquiry into AI chatbots that act as digital companions, focusing on potential risks to children and teenagers.

The consumer protection agency issued orders to seven companies -- including tech giants Alphabet, Meta, OpenAI and Snap -- seeking information about how they monitor and address negative impacts from chatbots designed to simulate human relationships.

"Protecting kids online is a top priority for" the FTC, said Chairman Andrew Ferguson, emphasizing the need to balance child safety with maintaining US leadership in artificial intelligence innovation.

The inquiry targets chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or confidants to users.

Regulators expressed particular concern that children and teens may be especially vulnerable to forming relationships with these AI systems.

The FTC is using its broad investigative powers to examine how companies monetize user engagement, develop chatbot personalities, and measure potential harm.

The agency also wants to know what steps firms are taking to limit children's access and comply with existing privacy laws protecting minors online.

Companies receiving orders include Character.AI, Elon Musk's xAI Corp, and others operating consumer-facing AI chatbots.

The investigation will examine how these platforms handle personal information from user conversations and enforce age restrictions.

The commission voted unanimously to launch the study, which does not have a specific law enforcement purpose but could inform future regulatory action.

The probe comes as AI chatbots have grown increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, particularly young people.

Last month the parents of Adam Raine, a teenager who committed suicide in April at age 16, filed a lawsuit against OpenAI, accusing ChatGPT of giving their son detailed instructions on how to carry out the act.

Shortly after the lawsuit emerged, OpenAI announced it was working on corrective measures for its world-leading chatbot.

The San Francisco-based company said it had notably observed that in prolonged exchanges, ChatGPT no longer consistently suggested contacting a mental health service when a user mentioned having suicidal thoughts.
