Exploring Explanation Effects on the Usage of Artificial Intelligence in Recruitment: Human Resources Professionals' Perspective

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review


Detail(s)

Original language: English
Title of host publication: PACIS 2024 Proceedings
Publication status: Published - Jul 2024

Conference

Title: 2024 Pacific Asia Conference on Information Systems (PACIS 2024)
Place: Viet Nam
City: Ho Chi Minh City
Period: 1 - 5 July 2024

Abstract

Artificial intelligence (AI) is increasingly used in recruitment for its data-handling capacity and decision consistency, but human resources professionals (HRPs) remain skeptical about its predictive accuracy and potential biases (e.g., hiring only male candidates), which affect perceptions of the justice of AI's decisions. Meanwhile, such advanced capabilities may make HRPs worry that AI could replace their roles and threaten their identity. Addressing these concerns and improving the acceptance of AI requires increasing the explainability of AI. We therefore propose classifying AI explanations into input, process, and output explanations. Our study will determine the effect of each explanation type on HRPs' reliance on AI and will explore how organizational justice and threat to identity influence that reliance. This research aims to clarify the psychological mechanisms affecting AI acceptance in hiring, contributing to the human-machine interaction and HR management literature.