SRQA: Synthetic Reader for Factoid Question Answering

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

6 Scopus Citations

Author(s)

  • Jiuniu Wang
  • Wenjia Xu
  • Xingyu Fu
  • Yang Wei
  • Li Jin
  • Ziyan Chen
  • Guangluan Xu
  • Yirong Wu

Related Research Unit(s)

Detail(s)

Original language: English
Article number: 105415
Journal / Publication: Knowledge-Based Systems
Volume: 193
Online published: 24 Dec 2019
Publication status: Published - 6 Apr 2020

Abstract

Question answering systems based on deep neural networks can answer questions from various fields and in various forms, but they still lack effective ways to handle multiple evidences. We introduce a new model called SRQA, which stands for Synthetic Reader for Factoid Question Answering. This model enhances the question answering system in the multi-document scenario from three aspects: model structure, optimization goal, and training method, corresponding to Multilayer Attention (MA), Cross Evidence (CE), and Adversarial Training (AT), respectively. First, we propose a multilayer attention network to obtain a better representation of the evidences. The multilayer attention mechanism conducts interaction between the question and the passage within each layer, so that the token representation of the evidences in each layer takes the requirements of the question into account. Second, we design a cross evidence strategy to choose the answer span across multiple evidences. We improve the optimization goal by considering all the answer locations in multiple evidences as training targets, which leads the model to reason among multiple evidences. Third, adversarial training is applied to high-level variables in addition to the word embedding in our model. A new normalization method is also proposed for the adversarial perturbations so that we can jointly add perturbations to several target variables. As an effective regularization method, adversarial training enhances the model's ability to process noisy data. Combining these three strategies, we enhance the contextual representation and answer-locating ability of our model, which can synthetically extract the answer span from several evidences. We evaluate SRQA on the WebQA dataset, and experiments show that our model outperforms state-of-the-art models (the best fuzzy score of our model reaches 78.56%, an improvement of about 2%).
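
The cross evidence objective and the joint perturbation normalization described above can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation; names such as cross_evidence_loss, joint_adversarial_perturbation, start_logits and gold_starts are our own assumptions. The loss marginalizes over every occurrence of the gold answer across the evidences, and the perturbation helper scales the gradients of several target variables by one shared norm before adding them back.

import torch
import torch.nn.functional as F

def cross_evidence_loss(start_logits, end_logits, gold_starts, gold_ends):
    # start_logits / end_logits: (num_tokens,) scores over the concatenated
    # evidence tokens; gold_starts / gold_ends: index tensors listing every
    # occurrence of the answer string across the evidences.
    start_log_probs = F.log_softmax(start_logits, dim=-1)
    end_log_probs = F.log_softmax(end_logits, dim=-1)
    # Marginalize over all gold spans: sum their probabilities, then negate the log.
    span_log_probs = start_log_probs[gold_starts] + end_log_probs[gold_ends]
    return -torch.logsumexp(span_log_probs, dim=0)

def joint_adversarial_perturbation(variables, grads, epsilon=1.0):
    # Normalize the gradients of several target variables by one shared norm,
    # so perturbations are added jointly rather than scaled independently.
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads) + 1e-12)
    return [v + epsilon * g / total_norm for v, g in zip(variables, grads)]

In a training loop, the gradients would be obtained with torch.autograd.grad with respect to the word embeddings and the selected higher-level representations, and a second forward pass on the perturbed variables would supply the adversarial loss term; the exact choice of target variables and of the perturbation scale follows the paper, not this sketch.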

Research Area(s)

  • Adversarial training, Cross evidence, Multilayer attention, Question answering

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Citation Format(s)

SRQA: Synthetic Reader for Factoid Question Answering. / Wang, Jiuniu; Xu, Wenjia; Fu, Xingyu et al.
In: Knowledge-Based Systems, Vol. 193, 105415, 06.04.2020.

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review