Social media giants made decisions that allowed more harmful content onto people's feeds after internal research into their algorithms showed how outrage fuelled engagement, whistleblowers told the BBC.

More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail and terrorism as they battled for users' attention.

An engineer at Meta, which owns Facebook and Instagram, described how senior management told him to allow more 'borderline' harmful content - including misogyny and conspiracy theories - in user feeds to compete with TikTok. 'They sort of told us that it's because the stock price is down,' the engineer said.

A TikTok employee provided access to the company's internal dashboards, which showed staff prioritising cases involving political figures over reports of harmful content, including reports involving children. Decisions, the employee said, were made to maintain strong relationships with political figures rather than to address risks to users.

The whistleblowers' accounts reveal a desperate race among social media companies to keep users engaged, even at the expense of safety - a race they warn has normalised harmful behaviour and speech among younger audiences. Algorithms, they say, are designed primarily to maximise engagement and often amplify negativity, which in turn can foster harmful behaviours and ideologies.

Reflecting on the shift within Meta, insiders said the 2020 launch of Instagram Reels came without sufficient safeguards, resulting in a higher prevalence of bullying, hate speech and violence than on other platforms. They also testified that corners were cut on safety measures to ramp up ad revenue in response to competitive pressure from TikTok.

In stark contrast to their public image, the companies' internal operations prioritised engagement and profit, allowing inflammatory and harmful content to outpace protective measures for users. TikTok's and Meta's approaches raise serious ethical questions about the responsibility of social media platforms to safeguard their users, especially vulnerable groups such as children and teenagers.