Hasty Briefs

Canadian child protection group uncovers abusive content in academic AI dataset

6 months ago
  • #AI Ethics
  • #Google Accountability
  • #Child Protection
  • The Canadian Centre for Child Protection (C3P) found 320 images of child sexual abuse material (CSAM) in a widely used academic AI dataset.
  • Google deleted approximately 137,000 files from the author's account, far exceeding the confirmed CSAM count and suggesting a significant number of false positives.
  • The author reported the issue to C3P and Google, but Google did not act until C3P intervened, leaving harmful content online for months.
  • Google's response—suspending accounts and deleting files en masse—discourages researchers from reporting harmful content due to fear of punitive actions.
  • The incident highlights ethical concerns in AI research and the need for systems that encourage transparency rather than fear.