SEML : A Semi-Supervised Multi-Task Learning Framework for Aspect-Based Sentiment Analysis

Research output: Journal Publications and Reviews (RGC: 21, 22, 62), publication in refereed journal, peer-reviewed

4 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 189287-189297
Journal / Publication: IEEE Access
Volume: 8
Online published: 16 Oct 2020
Publication status: Published - 2020

Abstract

Aspect-Based Sentiment Analysis (ABSA) involves two sub-tasks, namely Aspect Mining (AM) and Aspect Sentiment Classification (ASC), which aim to extract the words describing aspects of a reviewed entity (e.g., a product or service) and to analyze the sentiments expressed about those aspects. As AM and ASC can both be formulated as sequence labeling problems that predict the aspect or sentiment label of each word in a review, supervised deep sequence learning models have recently achieved the best performance. However, these supervised models require a large number of labeled reviews, which are costly to obtain or simply unavailable, and they usually perform only one of the two sub-tasks, which limits their practical use. To this end, this paper proposes a SEmi-supervised Multi-task Learning framework (called SEML) for ABSA. SEML has three key features. (1) SEML applies Cross-View Training (CVT) to enable semi-supervised sequence learning over a small set of labeled reviews and a large set of unlabeled reviews from the same domain in a unified end-to-end architecture. (2) SEML solves the two sub-tasks simultaneously by employing three stacked bidirectional recurrent neural layers to learn representations of reviews, where the representations learned at different layers are fed into CVT, AM, and ASC, respectively. (3) SEML develops a Moving-window Attentive Gated Recurrent Unit (MAGRU) for the three recurrent layers to enhance representation learning and prediction accuracy, since nearby contexts within a moving window of a review provide important semantic information for the prediction tasks in ABSA. Finally, we conduct extensive experiments on four review datasets from the SemEval workshops. Experimental results show that SEML significantly outperforms state-of-the-art models.
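The moving-window attention idea described in feature (3) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's MAGRU: it assumes a simple dot-product scoring function and a window half-width `w`, whereas the actual MAGRU couples such attention with a gated recurrent unit and learned parameters.

```python
import numpy as np

def moving_window_attention(H, w=2):
    """Illustrative moving-window attention over a sequence of hidden states.

    For each position t, attend only over states within +/- w positions
    (clipped at the sequence boundaries) and return the attended context.

    H: (T, d) array of hidden states for a review of T tokens.
    Returns: (T, d) array of context vectors.

    Note: dot-product scoring and the window size are assumptions made
    for this sketch; they are not taken from the paper.
    """
    T, _ = H.shape
    out = np.zeros_like(H)
    for t in range(T):
        lo, hi = max(0, t - w), min(T, t + w + 1)
        window = H[lo:hi]                 # (k, d) states inside the window
        scores = window @ H[t]            # similarity of each window state to h_t
        scores -= scores.max()            # shift for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum()          # softmax over the window only
        out[t] = weights @ window         # convex combination of window states
    return out
```

Because each output is a convex combination of nearby states only, distant tokens cannot dilute the attention distribution, which is the intuition behind restricting attention to a moving window.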

Research Area(s)

  • Aspect-based sentiment analysis, Cross-view training, End-to-end learning, Moving-window attention, Multi-task learning, Semi-supervised learning
