How can we reverse AI bias for inclusive policies?

Christopher MAHONY*, Matthew MANNING, Gabriel WONG, Varalakshmi VARALAKSHMI, José CUESTA

*Corresponding author for this work

Research output: Other Outputs › RGC 64A - Other outputs › peer-review

Abstract

The use of algorithms is increasingly common, from the media content we are shown online to assessments of our creditworthiness. Too often, however, these algorithms not only replicate but amplify bias, with discriminatory consequences. While commercial incentives have driven an explosion in the private sector’s use of machine learning (ML), governments have been slower to explore how it can mitigate inequality of outcomes between social groups.

A recently developed Cost-Benefit Analysis (CBA) tool informs justice intervention selection through comprehensive data collection, resource allocation, and measurement of societal benefits, particularly for marginalized groups. There is exciting and increasing potential for this approach to enhance policy decision-making, social sustainability, and public participation in policy processes.
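The core logic of comparing interventions by cost-benefit analysis can be sketched in a few lines. The following is an illustrative sketch only, not the authors' tool: the equity weight, intervention names, and all figures are invented assumptions, intended just to show how an equity-weighted benefit-cost ratio could rank candidate justice interventions.

```python
# Illustrative sketch only: NOT the CBA tool described in the abstract.
# It compares hypothetical justice interventions by an equity-weighted
# benefit-cost ratio, where benefits accruing to marginalized groups
# receive an assumed extra weight.

def benefit_cost_ratio(benefits_general, benefits_marginalized, cost,
                       equity_weight=1.5):
    """Equity-weighted benefit-cost ratio (equity_weight is an assumption)."""
    weighted_benefits = benefits_general + equity_weight * benefits_marginalized
    return weighted_benefits / cost

# Hypothetical interventions: (name, general benefits, benefits to
# marginalized groups, programme cost) -- all figures are invented.
interventions = [
    ("diversion programme", 800_000, 400_000, 500_000),
    ("legal aid expansion", 600_000, 700_000, 650_000),
]

# Rank interventions by their equity-weighted benefit-cost ratio.
ranked = sorted(interventions,
                key=lambda x: benefit_cost_ratio(x[1], x[2], x[3]),
                reverse=True)

for name, bg, bm, cost in ranked:
    print(f"{name}: BCR = {benefit_cost_ratio(bg, bm, cost):.2f}")
```

In this toy setup the diversion programme ranks first (BCR 2.80 versus roughly 2.54); in practice the choice of equity weight itself is a policy decision that such a tool would make explicit.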
Original language: English
Publisher: The World Bank
Publication status: Published - 25 Jan 2024

Bibliographical note

Information for this record is supplemented by the author(s) concerned.

Research Keywords

  • cost-benefit analysis
  • AI
  • inequality
  • justice reform
