How AI Chatbots Are Exploiting Minors: A Growing Concern for Parents

A recent report released by Graphika has exposed an alarming rise in harmful AI chatbots that target children, putting young users at serious risk. Readily available on platforms like Character.AI and Chub AI, these chatbots often adopt personas the same age as their users, making themselves seem highly relatable to children and teens. Yet many of these bots are programmed to steer conversations toward explicit and damaging interactions, exploiting the vulnerabilities of teenagers who are simply seeking companionship in an isolating online world.

The report’s findings are deeply unsettling. It uncovered over 10,000 of these dangerous chatbots spread across five major platforms. On Chub AI alone, more than 7,000 bots were explicitly labeled as “sexualized minor female characters,” while another 4,000 were tagged as “underage.” The sheer number of these bots reveals significant flaws in the current moderation systems that are supposed to protect young users. These systems seem incapable of handling the rapid growth of such harmful content.

What’s even more concerning is that many minors are turning to these AI companions not just for casual conversation, but for emotional support, academic help, and, disturbingly, for romantic or even explicit exchanges. In many cases, these chatbots are disguised as mental health aids, which makes the situation even more dangerous. Recognizing this threat, the American Psychological Association (APA) has called on the Federal Trade Commission (FTC) to investigate these platforms, especially those that falsely market these AI companions as safe or even therapeutic for young users.

The dangers don’t stop there. The report also highlighted the existence of communities that actively encourage the creation of chatbots designed to promote eating disorders and self-harm. Known as “ana buddy” or “meanspo coaches,” these bots are reaching teenagers who are already struggling with body image and mental health issues, making a bad situation even worse. This has led to growing calls for immediate regulatory action to protect minors from these digital threats.

Another disquieting disclosure in the report is the presence of extremist chatbots, which, while fewer in number, are no less worrisome. Some platforms host bots that promote violence, hate speech, and radical ideologies. For young and impressionable minds, exposure to such bots can normalize extremist ideas and behavior.

Experts are now warning that without comprehensive regulations targeting AI platforms that appeal to young audiences, these threats will only continue to grow. While legislative efforts, such as a new bill in California aimed at regulating AI chatbots, are a step in the right direction, they may not be enough without broader federal action.

As AI companions become an increasingly common part of young people’s digital lives, experts agree that the safety of these users must become a top priority for both tech companies and regulators. Until stronger safeguards are in place, parents must stay vigilant and informed about the risks these AI chatbots pose to their children.

Khushi Bhatia