SEML: A Semi-Supervised Multi-Task Learning Framework for Aspect-Based Sentiment Analysis
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review
Author(s)
LI, Ning; CHOW, Chi-Yin; ZHANG, Jia-Dong
Detail(s)
Original language | English |
---|---|
Pages (from-to) | 189287-189297 |
Journal / Publication | IEEE Access |
Volume | 8 |
Online published | 16 Oct 2020 |
Publication status | Published - 2020 |
Link(s)
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85097797647&origin=recordpage |
---|---|
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(36b1c518-9e29-4f74-ac3f-e31223524051).html |
Abstract
Aspect-Based Sentiment Analysis (ABSA) involves two sub-tasks, namely Aspect Mining (AM) and Aspect Sentiment Classification (ASC), which aim to extract the words describing aspects of a reviewed entity (e.g., a product or service) and to analyze the sentiments expressed on those aspects. As AM and ASC can each be formulated as a sequence labeling problem that predicts the aspect or sentiment label of each word in a review, supervised deep sequence learning models have recently achieved the best performance. However, these supervised models require large numbers of labeled reviews, which are costly to obtain or simply unavailable, and they usually perform only one of the two sub-tasks, which limits their practical use. To this end, this paper proposes a SEmi-supervised Multi-task Learning framework (called SEML) for ABSA. SEML has three key features. (1) SEML applies Cross-View Training (CVT) to enable semi-supervised sequence learning over a small set of labeled reviews and a large set of unlabeled reviews from the same domain in a unified end-to-end architecture. (2) SEML solves the two sub-tasks simultaneously by employing three stacked bidirectional recurrent neural layers to learn review representations, where the representations learned at different layers are fed into CVT, AM, and ASC, respectively. (3) SEML develops a Moving-window Attentive Gated Recurrent Unit (MAGRU) for the three recurrent layers to enhance representation learning and prediction accuracy, as nearby contexts within a moving window of a review provide important semantic information for the prediction tasks in ABSA. Finally, we conduct extensive experiments on four review datasets from the SemEval workshops. Experimental results show that SEML significantly outperforms state-of-the-art models.
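The moving-window attention idea behind MAGRU can be illustrated with a minimal sketch: for each word position, attend only over hidden states inside a small window around that position. The window size and dot-product scoring below are illustrative assumptions, not the paper's exact MAGRU formulation.

```python
import numpy as np

def moving_window_attention(H, w=2):
    """For each position t, attend over the hidden states in the
    window [t-w, t+w] and return attention-weighted context vectors.

    H : (T, d) array of hidden states for a review of T words.
    The dot-product relevance score is an illustrative choice.
    """
    T, d = H.shape
    contexts = np.zeros_like(H)
    for t in range(T):
        lo, hi = max(0, t - w), min(T, t + w + 1)
        window = H[lo:hi]                 # (k, d) nearby hidden states
        scores = window @ H[t]            # relevance of each neighbor to h_t
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()              # softmax over the window
        contexts[t] = alpha @ window      # weighted context vector for word t
    return contexts
```

In a full model, such a context vector would be combined with the GRU's own hidden state before being passed to the next layer or the AM/ASC prediction heads; only the windowed-attention step is sketched here.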
Research Area(s)
- Aspect-based sentiment analysis, Cross-view training, End-to-end learning, Moving-window attention, Multi-task learning, Semi-supervised learning
Citation Format(s)
SEML: A Semi-Supervised Multi-Task Learning Framework for Aspect-Based Sentiment Analysis. / LI, Ning; CHOW, Chi-Yin; ZHANG, Jia-Dong.
In: IEEE Access, Vol. 8, 2020, p. 189287-189297.