Hasty Briefs (beta)


Meta and TikTok let harmful content rise to drive engagement, say whistleblowers

6 hours ago
  • #algorithm
  • #whistleblowers
  • #social-media
  • Whistleblowers reveal Meta and TikTok allowed more harmful content to boost engagement after internal research showed outrage-driven content increased user interaction.
  • Meta instructed engineers to permit 'borderline' harmful content, including misogyny and conspiracy theories, to compete with TikTok, citing stock price concerns.
  • TikTok prioritized political cases over child safety reports to maintain relationships with politicians and avoid regulatory threats.
  • Instagram Reels, Meta's TikTok competitor, launched without sufficient safeguards, leading to higher rates of bullying, hate speech, and violent content.
  • Meta allocated 700 staff to grow Reels while denying safety teams additional resources for child protection and election integrity.
  • Internal documents show Facebook's algorithm prioritized profit over user well-being, amplifying harmful content that incited outrage.
  • TikTok's recommendation algorithm is a 'black box,' with engineers unaware of content specifics, relying on safety teams to remove harmful posts.
  • Teenagers reported being radicalized by algorithms, with platforms failing to filter violent and hateful content effectively.
  • TikTok whistleblower 'Nick' revealed that the company deprioritized child safety cases, focusing instead on political content to avoid regulatory backlash.
  • Meta's leadership prioritized competition over safety, allowing borderline content to increase engagement, despite known risks to users.