AI companions present risks for young users, US watchdog warns

BSS
Published On: 30 Apr 2025, 16:00

NEW YORK, April 30, 2025 (BSS/AFP) - AI companions powered by generative 
artificial intelligence present real risks and should be banned for minors, a 
leading US tech watchdog said in a study published Wednesday.

The explosion in generative AI since the advent of ChatGPT has seen several 
startups launch apps focused on conversation and connection, sometimes 
presented as virtual friends or therapists that tailor their communication to 
a user's tastes and needs.

The watchdog, Common Sense, tested several of these platforms, namely Nomi, 
Character AI, and Replika, to assess their responses.

While some specific cases "show promise," they are not safe for kids, 
concluded the organization, which makes recommendations on children's use of 
technological content and products.

The study was carried out in collaboration with mental health experts from 
Stanford University.

For Common Sense, AI companions are "designed to create emotional attachment 
and dependency, which is particularly concerning for developing adolescent 
brains."

According to the association, tests conducted show that these next-generation 
chatbots offer "harmful responses, including sexual misconduct, stereotypes, 
and dangerous 'advice'."

"Companies can build better" when it comes to the design of AI companions, 
said Nina Vasan, head of the Stanford Brainstorm lab, which works on the 
links between mental health and technology.

"Until there are stronger safeguards, kids should not be using them," Vasan 
said.

In one example cited by the study, a companion on the Character AI platform 
advises a user to kill someone, while another, chatting with a user seeking 
intense experiences, suggests taking a speedball, a mixture of cocaine and 
heroin.

In some cases, "when a user showed signs of serious mental illness and 
suggested a dangerous action, the AI did not intervene, and instead further 
encouraged the dangerous behavior," Vasan told reporters.

In October, a mother sued Character AI, accusing one of its companions of 
contributing to the suicide of her 14-year-old son by failing to clearly 
dissuade him from committing the act.

In December, Character AI announced a series of measures, including the 
deployment of a dedicated companion for teenagers.

Robbie Torney, who leads AI work at Common Sense, said the organization had 
carried out tests after these protections were put in place and found them to 
be "cursory."

However, he pointed out that some existing generative AI models include 
tools for detecting signs of mental disorder and prevent the chatbot from 
letting a conversation drift toward potentially dangerous content.

Common Sense made a distinction between the companions tested in the study 
and the more generalist chatbots such as ChatGPT or Google's Gemini, which do 
not attempt to offer an equivalent range of interactions.
