Amsterdam's high-stakes experiment to create fair welfare AI
- #welfare fraud
- #AI ethics
- #algorithmic bias
- Amsterdam's 'Smart Check' algorithm was designed to screen welfare applicants for potential fraud, but it drew significant criticism over bias and fairness.
- The model initially flagged migrants and men at disproportionate rates, and even after efforts to recalibrate it, it continued to wrongly flag applicants for investigation.
- Amsterdam followed ethical AI guidelines, including bias tests (a minimal illustration follows this list) and public consultations, yet the system still failed to meet fairness and effectiveness standards.
- The pilot program was ultimately discontinued because of persistent bias and inefficiency, and the city reverted to a human-led process that has its own documented biases.
- The case highlights broader debates about the feasibility of 'responsible AI' in public services and the ethical implications of algorithmic decision-making.
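To make the "bias tests" mentioned above concrete, here is a minimal sketch of one common check: comparing how often an automated screener flags applicants across demographic groups. The group labels, sample data, and the 1.25 disparity threshold are hypothetical illustrations, not details from Amsterdam's actual Smart Check evaluation.

```python
# Minimal sketch of a group-level bias test: compare flag rates across groups.
# Group labels, records, and the 1.25 threshold are hypothetical, not taken
# from Amsterdam's actual Smart Check audit.
from collections import defaultdict


def flag_rate_by_group(records):
    """Return the share of applicants flagged for investigation, per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}


def disparity_ratio(rates):
    """Ratio of highest to lowest group flag rate; 1.0 means parity."""
    lo, hi = min(rates.values()), max(rates.values())
    return float("inf") if lo == 0 else hi / lo


if __name__ == "__main__":
    # Hypothetical pilot outcomes: (group, flagged_for_investigation)
    records = [
        ("migrant", True), ("migrant", True), ("migrant", False),
        ("non-migrant", False), ("non-migrant", True), ("non-migrant", False),
    ]
    rates = flag_rate_by_group(records)
    ratio = disparity_ratio(rates)
    print(f"flag rates: {rates}")
    print(f"disparity ratio: {ratio:.2f}")
    # A common (and contested) rule of thumb treats ratios above 1.25
    # as a signal that flag rates differ substantially across groups.
    if ratio > 1.25:
        print("warning: flag rates differ substantially across groups")
```

Real audits of systems like Smart Check go further, examining false-positive rates and post-investigation outcomes rather than raw flag rates alone; this sketch only shows the shape of the simplest such test.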