Improving Dialog System Grounded with Unstructured Knowledge by Domain Adaptive Pre-Training and Post-Ranking

Research output: Journal Publications and Reviews › Publication in refereed journal › peer-review



Detail(s)

Original language: English
Article number: 2150019
Journal / Publication: International Journal of Humanoid Robotics
Volume: 18
Issue number: 6
Publication status: Published - Dec 2021

Abstract

Linguistic intelligence and the ability to converse with humans are important and indispensable capabilities of humanoid robots. One of the most challenging tasks in knowledge-grounded task-oriented dialog systems (KTDS) is knowledge selection, which aims to find the proper knowledge snippets for responding to user dialog requests. In this paper, we first propose domain-adapted BERT (DA-BERT), which employs pre-trained bidirectional encoder representations from transformers (BERT) with domain adaptive training and a dynamic masking probability for knowledge selection in KTDS. Domain adaptive training minimizes the domain gap between the general text data that BERT is pre-trained on and the joint dialog-knowledge data, while the dynamic masking probability strengthens training in an easy-to-hard manner. After knowledge selection, the next task in KTDS is knowledge-grounded generation. To improve performance on this task, we propose GPT-PR, which applies post-ranking to the generator's outputs. Post-ranking largely eliminates the chance of generating hallucinated responses during sampling-based decoding and thus improves the quality of the generated responses. Experimental results on the benchmark dataset show that our proposed pre-training and post-ranking methods, DA-BERT and GPT-PR, respectively, outperform state-of-the-art models by large margins across all evaluation metrics. Moreover, we analyze the failure cases of DA-BERT and GPT-PR and provide visualizations to facilitate further research in this direction.
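The "dynamic masking probability" idea can be sketched as a masked-language-modeling step whose masking rate is annealed over training, so early epochs mask few tokens (easy) and later epochs mask more (hard). The linear schedule and the 0.10/0.25 endpoints below are illustrative assumptions, not values taken from the paper:

```python
import random

def masking_prob(epoch, total_epochs, p_start=0.10, p_end=0.25):
    """Linearly anneal the masking probability from p_start to p_end.

    Easy-to-hard schedule: fewer masked tokens early in training, more later.
    The linear form and endpoint values are assumptions for illustration.
    """
    frac = epoch / max(total_epochs - 1, 1)
    return p_start + frac * (p_end - p_start)

def mask_tokens(tokens, p, mask_token="[MASK]", rng=None):
    """BERT-style MLM masking: replace each token with mask_token w.p. p.

    Returns the masked sequence and per-position labels (the original token
    where masked, None elsewhere), which a model would be trained to predict.
    """
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < p:
            masked.append(mask_token)
            labels.append(tok)   # prediction target: recover the original token
        else:
            masked.append(tok)
            labels.append(None)  # not a prediction target
    return masked, labels
```

In domain-adaptive pre-training, such masking would be applied to the joint dialog-knowledge corpus rather than general text, which is what shrinks the domain gap the abstract refers to.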
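Post-ranking over a generator's sampled outputs can be sketched as: draw several candidate responses via sampling-based decoding, score each against the selected knowledge snippet, and keep the best-scoring one, discarding likely hallucinations. The unigram-overlap scorer below is a hypothetical stand-in; the abstract does not specify GPT-PR's actual ranking criterion:

```python
def overlap_score(candidate, knowledge):
    """Fraction of candidate tokens that also appear in the knowledge snippet.

    A simple unigram-overlap proxy for knowledge consistency; the real
    post-ranker could use any learned or heuristic scoring function.
    """
    cand = candidate.lower().split()
    know = set(knowledge.lower().split())
    if not cand:
        return 0.0
    return sum(tok in know for tok in cand) / len(cand)

def post_rank(candidates, knowledge):
    """Return the sampled candidate most consistent with the knowledge snippet."""
    return max(candidates, key=lambda c: overlap_score(c, knowledge))

# Hypothetical usage: a hallucinated sample loses to a grounded one.
knowledge = "the hotel offers free parking"
candidates = [
    "we serve breakfast daily",                 # unsupported by the knowledge
    "yes the hotel offers free parking",        # grounded in the knowledge
]
best = post_rank(candidates, knowledge)
```

Because the filter operates after decoding, it composes with any sampling strategy (top-k, nucleus, etc.) without changing the generator itself.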

Research Area(s)

  • Domain adaptive training, dynamic masking probability, knowledge selection, knowledge-grounded generation, post-ranking