Facebook and Instagram icons displayed on an iPhone.
Jakub Porzycki | NurPhoto | Getty Images
Meta on Wednesday introduced new safety features for teen users, including enhanced direct messaging protections intended to prevent “exploitative content.”
Teens will now see more information about who they are chatting with, including when the other Instagram account was created, along with safety tips for spotting potential scammers. Teens can also block and report an account in a single action.
“In June alone, they blocked 1 million accounts and reported an additional 1 million after seeing the safety notice,” the company said in a release.
The policy is part of a broader push by Meta to protect teens and children on its platforms, following scrutiny from policymakers who accused the company of failing to shield young users from sexual exploitation.
Meta said it removed nearly 135,000 Instagram accounts earlier this year that were sexualizing children on the platform. The removed accounts were found to have left sexualized comments on, or requested sexual images from, adult-managed accounts featuring children.
The takedown also included 500,000 Facebook and Instagram accounts linked to the original profiles.
Meta now automatically places teen accounts and adult-run accounts representing children into its strictest message and comment settings, which filter out offensive messages and limit contact from unknown accounts.
Users must be at least 13 to use Instagram, but adults can run accounts representing younger children as long as the account bio makes clear that an adult manages the account.
The company has recently been accused of building addictive features across its family of apps that have detrimental effects on children’s mental health.
Last week, Meta announced it had removed about 10 million profiles in the first half of 2025 that were impersonating large content producers, as part of the company’s effort to combat “spammy content.”
Congress has renewed its efforts to regulate social media platforms with a focus on child safety. The Kids Online Safety Act was reintroduced in Congress in May after stalling in 2024.
The measure would require social media platforms to exercise a “duty of care” to prevent their products from harming children.
Snapchat was sued by New Mexico in September over claims that the app created an environment where predators could easily target children through sextortion schemes.