Instagram's tools designed to protect teenagers from harmful content are failing to stop them from seeing suicide and self-harm posts, a study has claimed. Researchers also reported that the platform, which is owned by Meta, encourages children 'to post content that received highly sexualized comments from adults.' A test by child safety groups found that 30 of the 47 safety tools for teens on Instagram were 'substantially ineffective or no longer there.' Meta disputed the findings, saying its protections have reduced teens' exposure to harmful content.

The report, by Cybersecurity for Democracy and several child safety groups, criticized Instagram's safety measures, finding that only eight of the 47 tools were fully effective and that the remainder allowed harmful content through, including posts about 'demeaning sexual acts.' The study alleged that Instagram's algorithm incentivizes risky behavior in pursuit of likes, prompting Meta's critics to suggest the company's safety changes look more like PR stunts than genuine efforts at online safety. Despite Meta's claims of improved user safety, experts warn these tools remain far from adequate.