Meta to Use AI for Identifying Underage Instagram Users, Tightens Rules

| 2025-06-01 | Spotlight


In a significant move to enhance teen safety online, Meta Platforms has announced a new artificial intelligence (AI)-driven initiative to proactively identify underage users on Instagram. The move marks a tougher stance by the social media giant amid growing global concerns about the impact of social media on children’s mental health.

Meta, which owns Instagram, revealed that it will now use AI not just to verify user ages during sign-up but to continuously monitor accounts that may have falsified their birthdates. If the AI suspects a user has misrepresented their age, their account will automatically be shifted to a “teen account,” which comes with stricter privacy and content controls.

How Meta’s AI Will Work

The AI system will evaluate multiple signals, including the type of content users interact with, when the account was created, and profile information, to estimate a user’s true age. If discrepancies are detected, the AI will intervene, ensuring that potentially underage users are subject to teen-specific restrictions.
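Meta has not published how its system combines these signals. As a purely illustrative sketch, the decision described above could be modeled as a score over several account signals; every signal name, weight, and threshold below is hypothetical and exists only to show the shape of such logic:

```python
# Illustrative sketch only: Meta has not disclosed its actual model.
# All signal names, weights, and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int                 # age implied by the birthdate on file
    teen_content_ratio: float       # share of interactions with teen-oriented content (0..1)
    account_age_days: int           # days since the account was created
    profile_mentions_school: bool   # e.g. a school name in the bio

def likely_underage(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Return True when combined signals suggest the stated adult age is inflated."""
    score = 0.0
    if s.teen_content_ratio > 0.6:
        score += 0.4                # mostly interacts with teen-oriented content
    if s.account_age_days < 90 and s.stated_age >= 18:
        score += 0.2                # brand-new account already claiming adulthood
    if s.profile_mentions_school:
        score += 0.3                # profile hints at school attendance
    # Only accounts claiming to be adults can be reclassified as teen accounts
    return s.stated_age >= 18 and score >= threshold

# An account claiming to be 19 but showing strong teen-usage signals
flagged = likely_underage(AccountSignals(19, 0.8, 30, True))
print(flagged)  # True -> the account would be shifted to a teen account
```

In practice, a production system would likely use a trained classifier over far richer behavioral data rather than hand-set rules, but the outcome is the same as described in the article: a flagged account is automatically moved into teen-account restrictions.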

Teen accounts on Instagram are private by default, limiting interactions to approved followers only. Direct messages can be exchanged only between users who follow each other. Sensitive content, such as violent videos or posts promoting cosmetic procedures, is restricted. Additionally, teens receive a notification if they spend more than 60 minutes on the platform. A “sleep mode” is also activated from 10 PM to 7 AM, during which notifications are paused and automatic replies are sent for direct messages.

Strengthening Parental Controls

In addition, Meta intends to notify parents and offer guidance on how to talk with their children about the importance of providing accurate age information online. This step is part of broader efforts to involve parents more closely in their kids’ social media use.

Addressing Global Scrutiny

Meta’s new measures come amid increased scrutiny of social media companies over the impact their platforms have on young people’s mental health. To protect children online, governments in several countries are either considering or have already implemented age verification laws, though many of these measures face legal challenges.

Meta and other tech firms have advocated for shifting the responsibility of age verification to app stores, arguing that they should play a more direct role in preventing underage access to social platforms.

A Broader Context of Concern

The steps taken by Meta reflect increasing societal anxieties around how platforms like Instagram may expose teenagers to inappropriate content or extended screen time, both of which have been linked to negative mental health outcomes. While Meta asserts that these new AI tools are necessary for user safety, the broader debate over online child protection continues to evolve globally.

