Enhancing Low-Resource NLP by Consistency Training With Data and Model Perturbations

Xiaobo Liang, Runze Mao, Lijun Wu*, Juntao Li, Min Zhang, Qing Li

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

5 Citations (Scopus)

Abstract

Natural language processing (NLP) has recently shown significant progress in rich-resource scenarios. However, it is much less effective in low-resource scenarios, where models easily overfit the limited training data and generalize poorly on test data. In recent years, consistency training has been widely adopted and has shown great promise in deep learning, but it remains largely unexplored in low-resource settings. In this work, we propose DM-CT, a framework that incorporates both data-level and model-level consistency training as well as advanced data augmentation techniques for low-resource scenarios. Concretely, the input data is first augmented, and the output distributions of different sub-models generated by model variance are forced to be consistent (model-level consistency). Meanwhile, the predictions of the original input and the augmented one are also constrained to be consistent (data-level consistency). Experiments on different low-resource NLP tasks, including neural machine translation (four IWSLT14 translation tasks, a multilingual translation task, and WMT16 Romanian → English translation), natural language understanding (the GLUE benchmark), and named entity recognition (CoNLL2003 and WikiGold), demonstrate the superiority of DM-CT, which obtains significant and consistent performance improvements. © 2023 IEEE.
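The abstract does not include implementation details, but the two consistency terms it describes can be illustrated with a minimal PyTorch sketch. Everything below is an assumption rather than the authors' code: `model` is taken to be a classifier kept in training mode so that dropout makes each forward pass a different sub-model, `x_aug` is a data-augmented copy of `x`, and `dm_ct_loss`, `alpha`, and `beta` are hypothetical names for the combined objective and its weights. Symmetric KL divergence is used for the consistency terms here; the paper may use a different divergence or weighting.

```python
import torch
import torch.nn.functional as F

def dm_ct_loss(model, x, x_aug, labels, alpha=1.0, beta=1.0):
    """Sketch of a DM-CT-style objective: supervised loss plus
    model-level and data-level consistency terms."""
    # Two stochastic forward passes over the original input: with dropout
    # active, each pass corresponds to a different sub-model
    # (model-level perturbation).
    log_p_a = F.log_softmax(model(x), dim=-1)
    log_p_b = F.log_softmax(model(x), dim=-1)
    # One forward pass over the augmented input (data-level perturbation).
    log_p_aug = F.log_softmax(model(x_aug), dim=-1)

    # Supervised loss on the original input (log-probs + class indices).
    ce = F.nll_loss(log_p_a, labels)

    # Model-level consistency: symmetric KL between the two sub-models.
    model_consistency = 0.5 * (
        F.kl_div(log_p_a, log_p_b, reduction="batchmean", log_target=True)
        + F.kl_div(log_p_b, log_p_a, reduction="batchmean", log_target=True)
    )

    # Data-level consistency: pull the augmented-input prediction toward
    # the (detached) original-input prediction.
    data_consistency = F.kl_div(
        log_p_aug, log_p_a.detach(), reduction="batchmean", log_target=True
    )

    return ce + alpha * model_consistency + beta * data_consistency
```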
Original language: English
Pages (from-to): 189-199
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 32
Online published: 19 Oct 2023
DOIs
Publication status: Published - 2023

Research Keywords

  • Consistency training
  • data augmentation
  • low-resource
  • natural language processing
