Towards a Trust and Context-based Right to Explanation of Automated Decision-making Algorithms in China

  • HE, Tianxiang (Principal Investigator / Project Coordinator)
  • DING, Xiaodong (Co-Investigator)
  • YU, Ronald Ker-Wei (Co-Investigator)

Project: Research

Project Details

Description

We are now living in an era of artificial intelligence (AI) and big data, in which algorithms play a critical role in making many important decisions that affect our daily lives. There is no doubt that the wide application of decision-making algorithms in areas such as ride-hailing, autonomous driving, and purchase recommendation brings benefits such as higher efficiency and better service. However, such algorithms also pose serious challenges for legal systems owing to the black-box issue of AI. First, human autonomy is threatened if decision-making algorithms take over the role of humans in making binding decisions, yet those decisions cannot be properly explained. Second, personal freedom is encroached upon, as many algorithm-based recommendation systems rely on collected user-preference data, thereby potentially generating information cocoons. Third, when algorithms are trained on incomplete datasets, they can introduce discrimination and bias.

In response to these risks, scholars have proposed two solutions: mandatory algorithm disclosure to address the black-box issue, and the creation of a right to explanation to safeguard personal freedom. However, whether full mandatory disclosure is practical and desirable is a valid question. In addition, issues concerning the content, extent, timing, and manner of the right to explanation provided by national laws remain highly controversial and unsettled. This research argues that the right to explanation should be based on contextual integrity and trust theories rather than an individual rights model. It aims to fill the gap by answering the following question: what is the proper design of the right to explanation in China?
In answering that question, this project will address four sub-issues using a mixed-methods approach: first, the problems with the individual-right route to the right to explanation as provided by the laws of the EU and China; second, whether the right-to-explanation path can be substituted by an external regulatory model; third, if not, how the right to explanation should be reconstructed with trust and context-based theories to avoid the identified problems; and fourth, the unique features of China's commercial sectors that will affect the design of the right to explanation. This research suggests that the right to explanation should be a dynamic, communicative, relativistic procedural right supported by trust and context-based theories. The research findings will help to illuminate general patterns of algorithm regulation beyond studies focusing on specific jurisdictions.
Project number: 9043626
Grant type: GRF
Status: Active
Effective start/end date: 1/01/24 → …
