Hasty Briefs (beta)

  • #Human Rights
  • #AI
  • #Wikimedia
  • The Wikimedia Foundation believes access to knowledge is a human right and aims to ensure free and open access to reliable information.
  • A 2024 Human Rights Impact Assessment (HRIA) examined how artificial intelligence (AI) and machine learning (ML) affect human rights within the Wikimedia ecosystem.
  • Generative AI and large language models (LLMs) present both opportunities and challenges for information creation, access, and distribution.
  • Wikimedia has used AI/ML tools since 2010 for tasks like vandalism detection and citation flagging, but generative AI raises new questions.
  • Key questions include what role AI should play in knowledge sharing, how to protect the accuracy of information, and how to ensure AI tools support rather than replace human contributions.
  • The HRIA report identifies potential risks and opportunities, noting that no actual harms have been documented so far.
  • Risks include biases in AI tools, harmful content generation, and downstream impacts from using Wikimedia content in LLM training.
  • The report recommends monitoring risks and leveraging existing data-quality initiatives to mitigate potential harms.
  • The Foundation and volunteer communities are already implementing strategies to address these risks.
  • Community feedback and collaboration are essential for effectively implementing the HRIA's recommendations.
  • Upcoming discussions and translation efforts aim to engage the global Wikimedia community in addressing AI/ML challenges.
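The vandalism-detection tooling mentioned above long predates generative AI. As a rough illustration of the idea only, a toy edit scorer might look like the sketch below. This is not Wikimedia's actual system (its production models are trained on many revision features); the blocklist, thresholds, and weights here are all made up for demonstration.

```python
# Toy vandalism scorer: assigns a suspicion score in [0, 1] to an edit.
# Purely illustrative; thresholds and signal words are invented.

BLOCKLIST = {"asdf", "lol"}  # hypothetical signal tokens

def vandalism_score(old_text: str, new_text: str) -> float:
    """Return a score in [0, 1]; higher suggests the edit needs review."""
    score = 0.0
    # Large deletions are a common vandalism signal.
    if len(new_text) < 0.5 * len(old_text):
        score += 0.4
    # Shouting (long all-caps words) added to the page.
    added = new_text.replace(old_text, "")
    if any(w.isupper() and len(w) > 3 for w in added.split()):
        score += 0.3
    # Blocklisted tokens anywhere in the new revision.
    if any(tok in new_text.lower() for tok in BLOCKLIST):
        score += 0.3
    return min(score, 1.0)

# An unchanged page scores 0.0; a shouting, blocklisted edit scores higher.
print(vandalism_score("The capital of France is Paris.",
                      "The capital of France is LOLOLOL"))
```

A real pipeline would feed such scores (from a trained classifier, not hand-written rules) into patroller review queues, keeping humans in the loop rather than auto-reverting edits.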