
Many of Instagram’s teen-safety tools ‘simply are not working’, study finds

Only eight of 47 features designed to stop teens from accessing harmful content or being contacted by strangers were fully effective, report says

Young people use mobile phones on a train in Hong Kong. A recent report claims that many of Instagram’s features meant to stop young users from accessing harmful content can be easily circumvented. Photo: Jelly Tse

Numerous safety features that Meta says it has implemented over the years to protect young users on Instagram do not work well or, in some cases, do not exist, according to a report from child-safety advocacy groups corroborated by researchers at Northeastern University in Boston, in the United States.

The study, which Meta disputed as misleading, comes amid renewed pressure on tech companies to protect children and other vulnerable users of their social media platforms.

Of 47 safety features tested, the groups judged only eight to be completely effective. The rest were either flawed, “no longer available or were substantially ineffective”, the report stated.


Search-term blocks meant to stop young users from surfacing content related to self-harm were easily circumvented, the researchers reported.

Meta called the study’s findings erroneous and misleading. Photo: Reuters
Anti-bullying message filters also failed to activate, even when prompted with the same harassing phrases Meta had used in a press release promoting them. And a feature meant to redirect teens away from bingeing on self-harm-related content never triggered, the researchers found.