Researchers' Report Reveals Failures in Instagram's Safety Features for Teenagers


Growing concern over the safety of teenagers on social media has led researchers at Northeastern University to examine the safety features Meta has implemented on Instagram. A recent report by child safety advocacy groups argues that these measures are ineffective or, in some cases, nonexistent.
A Study Under Scrutiny
The report, supported by organizations such as the Molly Rose Foundation in the UK and Parents for Safe Online Spaces in the United States, notes that of 47 safety measures evaluated, only eight were found to be fully effective. The remaining mechanisms were flawed, disabled, or notably ineffective.
According to the analysis, Instagram's attempts to restrict young users' access to self-harm-related content through a search term blocking system were easy to bypass. This raises serious doubts about the actual effectiveness of the tools designed to protect teenagers on the platform.
Deficiencies in Harassment Filters
Additionally, the study highlights that the harassment message filters did not activate even when testers used the specific phrases Meta had cited in a promotional press release. This significant failure increases the risk of young people being exposed to harmful and potentially dangerous content.
The tests also revealed that a feature intended to limit exposure to self-harm-related content never activated, underscoring the gaps in the protections that Instagram should be providing to its younger audience.
An Alarming Background
The report, titled "Teen Accounts, Broken Promises," compiles and assesses the safety and wellbeing features for young users that Instagram has publicly announced over the past decade. The organizations involved have a tragic background: they were founded by parents who say their children died as a result of harassment and self-harm content on Meta's platforms.
Laura Edelson, a professor at Northeastern University who supervised the review, questions the company's claims about its commitment to protecting teenagers from the most harmful aspects of its platform.
Meta's Response
Meta has characterized the report's findings as erroneous and misleading. A company spokesperson, Adam Stone, said that teenagers using the protections encountered less sensitive content, experienced fewer unwanted contacts, and spent less time on Instagram at night. Despite the criticism, the spokesperson stated that the company will continue to improve its tools and values constructive feedback that helps it do so.
Insights from a Former Security Executive
The study drew on insights from Arturo Bejar, a former safety executive at Meta. Bejar worked at the company until 2015 and returned as a consultant for Instagram from 2019 to 2021. He claims that during his time there, Meta failed to respond adequately to data indicating significant safety issues for teenagers on its platform.
Bejar emphasizes that Instagram's safety features are flawed and argues that the company's failure to act on these warnings has contributed to a potentially dangerous environment for teenagers.
Worrying Conclusions
The combination of deficiencies found in Instagram's safety features and the lack of effective actions by Meta raises concerns about the protection of younger users on social media. This report adds to the mounting pressure that tech companies face to create a safer environment for all users, especially teenagers, who are particularly vulnerable to online dangers.
As discussions about online safety continue, it is imperative that social media platforms implement robust and effective measures to protect their young users. Parents, educators, and activists will continue to demand greater transparency and accountability from companies, while young people keep seeking a safe space in the digital world.
To delve deeper into this topic and other relevant aspects of technology and online safety, readers are invited to continue exploring the content of this blog.