Social media giants made decisions that allowed more harmful content onto people's feeds, after internal research into their algorithms showed how outrage fueled engagement, whistleblowers told the BBC.

More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail, and terrorism as they battled for users' attention.

An engineer at Meta described how senior management instructed him to allow more “borderline” harmful content, including misogyny and conspiracy theories, so the platform could compete with TikTok.

He said the decision was driven by concern over the company's stock price.

A TikTok employee described internal complaints dashboards which showed that cases involving high-profile political figures were prioritized over reports of harmful content affecting children.

This was described as an effort to maintain relationships with politicians and avoid regulatory repercussions.

The whistleblowers also said internal metrics showed that TikTok's rival, Instagram Reels, launched with inadequate safety mechanisms, resulting in higher rates of bullying, hate speech, and violent content.

The whistleblowers describe a consistent pattern in which both companies' algorithms, rather than protecting users, amplify content that puts their well-being at risk in pursuit of revenue.

In response, Meta said any suggestion that it deliberately amplified harmful content for profit was unfounded, while TikTok described the claims as fabricated. The whistleblowers' accounts nonetheless add to growing evidence of the tension between engagement-driven algorithms and user welfare.