Hasty Briefs

We investigated Amsterdam's attempt to build a 'fair' fraud detection model

3 days ago
  • #welfare fraud detection
  • #AI fairness
  • #algorithmic bias
  • Lighthouse Reports investigated welfare fraud detection algorithms in five European countries and found discrimination against vulnerable groups.
  • Amsterdam developed a machine learning model to predict which welfare applications were likely to be incorrect, aiming for a fair and transparent system.
  • The model used an Explainable Boosting Machine (EBM) with 15 features, avoiding explicit demographic data while acknowledging that other features could act as proxies (a minimal EBM training sketch follows this list).
  • Training data drawn from past investigations skewed the label distribution: over 50% of training examples were labeled 'investigation worthy', versus roughly 7% of applications in real-world caseloads.
  • Amsterdam tested fairness under several definitions: Statistical Parity, False Discovery Rate, False Positive Share, and False Positive Rate (see the per-group metrics sketch after this list).
  • The initial model was biased against non-Dutch applicants; reweighting the training data reduced this but introduced new biases against women and Dutch applicants (a sketch of one common reweighting scheme also follows this list).
  • The pilot deployment showed degraded performance and new biases, leading Amsterdam to shelve the project.
  • Tradeoffs across fairness definitions and across groups proved unavoidable, highlighting the complexity of building fair AI systems.
  • Privacy laws restricted access to the underlying data, so findings could only be checked against aggregated results, limiting full transparency and auditability.
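
The brief does not include Amsterdam's code, but for readers unfamiliar with EBMs, here is a minimal, hypothetical sketch of training one with the open-source `interpret` package. The feature names, data, and labels are synthetic placeholders, not the city's actual 15 features.

```python
# Hypothetical sketch of training an Explainable Boosting Machine (EBM)
# with the open-source `interpret` package. Features and labels below are
# synthetic placeholders, not Amsterdam's actual 15 features.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "household_size": rng.integers(1, 7, n),        # placeholder feature
    "months_unemployed": rng.integers(0, 120, n),   # placeholder feature
    "prior_applications": rng.integers(0, 10, n),   # placeholder feature
})
y = (rng.random(n) < 0.1).astype(int)  # 1 = "investigation worthy" (synthetic)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# EBMs are glass-box models: each feature contributes through an additive
# shape function, so per-feature effects can be inspected after training.
print(ebm.explain_global().data())
print(ebm.predict_proba(X[:5]))
```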
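The four definitions the city tested are standard group-level metrics. A self-contained sketch that computes them per group from predictions, ground truth, and a group column (the data here is synthetic, not the city's):

```python
# Sketch of the four group-fairness metrics named in the brief, computed
# from binary predictions, ground-truth labels, and a group column.
import numpy as np
import pandas as pd

def group_fairness_report(y_true, y_pred, groups):
    df = pd.DataFrame({"y": y_true, "yhat": y_pred, "g": groups})
    total_fp = ((df.yhat == 1) & (df.y == 0)).sum()
    rows = []
    for g, sub in df.groupby("g"):
        tp = ((sub.yhat == 1) & (sub.y == 1)).sum()
        fp = ((sub.yhat == 1) & (sub.y == 0)).sum()
        tn = ((sub.yhat == 0) & (sub.y == 0)).sum()
        rows.append({
            "group": g,
            # Statistical parity: share of the group that gets flagged at all.
            "flag_rate": (sub.yhat == 1).mean(),
            # False discovery rate: flagged cases that were actually fine.
            "fdr": fp / max(tp + fp, 1),
            # False positive share: the group's slice of all wrongful flags.
            "fp_share": fp / max(total_fp, 1),
            # False positive rate: innocent cases that still got flagged.
            "fpr": fp / max(fp + tn, 1),
        })
    return pd.DataFrame(rows)

rng = np.random.default_rng(1)
n = 5000
groups = rng.choice(["Dutch", "non-Dutch"], n)
y_true = (rng.random(n) < 0.07).astype(int)  # ~7% truly problematic
y_pred = (rng.random(n) < np.where(groups == "non-Dutch", 0.15, 0.10)).astype(int)
print(group_fairness_report(y_true, y_pred, groups))
```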
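The brief does not say which reweighting scheme Amsterdam used. One common approach (in the style of Kamiran and Calders) gives each (group, label) cell the weight P(group)·P(label) / P(group, label), so that group membership and the label look statistically independent to the learner. A hypothetical sketch on synthetic data:

```python
# Sketch of Kamiran & Calders-style reweighing: weight each (group, label)
# cell by P(group)*P(label)/P(group, label). Not confirmed to be the exact
# scheme Amsterdam used; data below is synthetic.
import numpy as np
import pandas as pd

def reweigh(groups: pd.Series, labels: pd.Series) -> pd.Series:
    df = pd.DataFrame({"g": groups, "y": labels})
    p_g = df.g.value_counts(normalize=True)
    p_y = df.y.value_counts(normalize=True)
    p_gy = df.groupby(["g", "y"]).size() / len(df)
    return df.apply(lambda r: p_g[r.g] * p_y[r.y] / p_gy[(r.g, r.y)], axis=1)

rng = np.random.default_rng(2)
n = 1000
groups = pd.Series(rng.choice(["Dutch", "non-Dutch"], n, p=[0.6, 0.4]))
# Skewed labels: non-Dutch applicants over-represented among positives.
labels = pd.Series(np.where(groups == "non-Dutch",
                            rng.random(n) < 0.6,
                            rng.random(n) < 0.4).astype(int))
w = reweigh(groups, labels)

# After reweighting, the weighted positive rate is roughly equal across groups.
report = pd.DataFrame({"g": groups, "y": labels, "w": w})
print(report.groupby("g").apply(lambda d: np.average(d.y, weights=d.w)))
```

As the brief notes, equalizing one statistic this way does not prevent other disparities (for example, against women or Dutch applicants) from appearing once the model is retrained.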