Facebook and Instagram icons are seen displayed on an iPhone.
Jakub Porzycki | NurPhoto | Getty Images
Meta on Wednesday launched new safety features for teen users, including enhanced direct messaging protections to prevent "exploitative content."
Teens will now see more information about who they're chatting with, such as when the Instagram account was created and other safety tips, to help them spot potential scammers. Teens will also be able to block and report accounts in a single action.
"In June alone, they blocked accounts 1 million times and reported another 1 million after seeing a Safety Notice," the company said in a release.
The policy is part of a broader push by Meta to protect teens and children on its platforms, following mounting scrutiny from policymakers who accused the company of failing to shield young users from sexual exploitation.
Meta said it removed nearly 135,000 Instagram accounts earlier this year that were sexualizing children on the platform. The removed accounts were found to be leaving sexualized comments or requesting sexual images from adult-managed accounts featuring children.
The takedown also included 500,000 Instagram and Facebook accounts that were linked to the original profiles.
Meta is now automatically placing teen and child-representing accounts into the strictest message and comment settings, which filter out offensive messages and limit contact from unknown accounts.
Users must be at least 13 to use Instagram, but adults can run accounts representing younger children as long as the account bio makes clear that an adult manages the account.
The platform was recently accused by several state attorneys general of implementing addictive features across its family of apps that have detrimental effects on children's mental health.
Meta announced last week that it removed about 10 million profiles for impersonating large content producers through the first half of 2025, part of an effort by the company to combat "spammy content."
Congress has renewed efforts to regulate social media platforms with a focus on child safety. The Kids Online Safety Act was reintroduced in Congress in May after stalling in 2024.
The measure would require social media platforms to have a "duty of care" to prevent their products from harming children.
Snapchat was sued by New Mexico in September, with the suit alleging the app was creating an environment where "predators can easily target children through sextortion schemes."