Bluesky 2025 Transparency Report
7 days ago
- #transparency
- #moderation
- #social-media
- Bluesky grew nearly 60% in 2025, reaching 41.41M users, including those on federated AT Protocol servers.
- Users created 1.41 billion posts in 2025, with 235M containing media (photos, videos, etc.).
- Moderation scaled with hybrid automation and human oversight, focusing on toxicity filtering, age assurance, policy updates, verification, and compliance.
- Toxicity filtering reduced anti-social behavior reports by 79%, through features such as automatically hiding toxic replies and automated moderation of user lists.
- A new verification system launched, with 4,327 accounts verified (3,567 by Bluesky, 777 by Trusted Verifiers like CNN and Wired).
- Age assurance was implemented in the UK, US, and Australia, affecting 364,960 accounts, balancing privacy and regulatory compliance.
- Community Guidelines were updated with user feedback, organizing policies into four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules.
- A strike system introduced proportional enforcement, with severity levels (low to critical) and escalating consequences for violations.
- 9.97M user reports were submitted in 2025, with misleading content (43.73%) and harassment (19.93%) as top categories.
- Proactive detection flagged 2.54M potential violations, while 16.49M labels were applied (95% automated) to manage content visibility.
- 2.45M takedowns occurred, with 81% targeting 'Be Authentic' violations (spam, impersonation).
- Child safety efforts led to 6,502 post removals and 5,238 reports to NCMEC, with hash-matching used to block known illegal material before users can be exposed to it.
- 1,470 legal requests were processed (90.7% compliance), mostly from Germany, the US, and Japan.
- 2026 priorities include improving safety features, user experience, and ecosystem moderation tools.
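The strike system described above pairs severity tiers with escalating consequences. A minimal sketch of how such proportional enforcement could work is below; the tier names, score thresholds, and actions are illustrative assumptions, not Bluesky's actual rules.

```python
from enum import IntEnum

class Severity(IntEnum):
    """Hypothetical severity tiers, low to critical."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4

def action_for(strikes: int, severity: Severity) -> str:
    """Escalate consequences with both repeat offenses and severity.

    Thresholds here are made-up placeholders for illustration.
    """
    if severity == Severity.CRITICAL:
        return "permanent ban"  # critical violations skip the escalation ladder
    score = strikes * int(severity)
    if score >= 6:
        return "permanent ban"
    if score >= 3:
        return "temporary suspension"
    return "warning"

print(action_for(1, Severity.LOW))       # warning
print(action_for(2, Severity.MODERATE))  # temporary suspension
print(action_for(1, Severity.CRITICAL))  # permanent ban
```

The key property is proportionality: a first low-severity violation draws only a warning, while repeated or severe violations escalate toward suspension and removal.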
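The hash-matching approach used for child safety compares uploads against digests of known illegal files so the material is blocked without anyone viewing it. A minimal sketch of the lookup pattern is below; production systems use perceptual hashes (e.g. PhotoDNA) that tolerate re-encoding, whereas plain SHA-256 is used here only to illustrate the mechanism, and the blocklist contents are placeholders.

```python
import hashlib

# Digests of known-bad files, as supplied by a clearinghouse such as
# NCMEC. The entry below is a placeholder for illustration only.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"placeholder-known-bad-file").hexdigest(),
}

def is_known_bad(file_bytes: bytes) -> bool:
    """Return True if the upload's digest matches the blocklist."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# A matching upload is rejected before it can be stored or viewed.
print(is_known_bad(b"placeholder-known-bad-file"))  # True
print(is_known_bad(b"an ordinary photo"))           # False
```

Because only digests are compared, moderators and infrastructure never need to handle the underlying material to detect a match.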