Facial Micro-Expression Recognition Using Double-Stream 3D Convolutional Neural Network with Domain Adaptation
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Detail(s)
| Original language | English |
| --- | --- |
| Article number | 3577 |
| Journal / Publication | Sensors |
| Volume | 23 |
| Issue number | 7 |
| Online published | 29 Mar 2023 |
| Publication status | Published - Apr 2023 |
Link(s)
| Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85152315808&origin=recordpage |
| Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(dcb36ac8-be08-4f36-9a96-ca5d9e259956).html |
Abstract
Micro-expressions (MEs) are brief facial displays of emotions that a person wants to conceal. ME recognition has been applied in various fields, but automatic ME recognition remains challenging for two major reasons. First, MEs are typically of short duration and low intensity, so it is hard to extract discriminative features from ME videos. Second, collecting ME data is tedious, and existing ME datasets usually contain insufficient video samples. In this paper, we propose a deep learning model, the double-stream 3D convolutional neural network (DS-3DCNN), for recognizing MEs captured in video. The recognition framework contains two 3D-CNN streams: the first extracts spatiotemporal features from the raw ME videos, while the second extracts variations of the facial motions within the spatiotemporal domain. To facilitate feature extraction, the subtle motion embedded in an ME is amplified. To address the insufficiency of ME data, a macro-expression dataset is employed to expand the training sample size, and supervised domain adaptation is adopted in model training to bridge the difference between the ME and macro-expression datasets. The DS-3DCNN model is evaluated on two publicly available ME datasets. The results show that it outperforms various state-of-the-art models; in particular, it outperforms the best model presented in MEGC2019 by more than 6%. © 2023 by the authors.
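The double-stream idea in the abstract can be sketched in a few lines of NumPy: one stream processes the raw clip, the other processes a motion signal derived from it (here simple temporal frame differencing stands in for the optical-flow-style motion features), and the two descriptors are fused by concatenation. The layer sizes, the single-kernel convolution, and the global-average pooling are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Single-channel 'valid' 3D convolution over a (T, H, W) volume."""
    t, h, w = kernel.shape
    out = np.zeros((clip.shape[0] - t + 1,
                    clip.shape[1] - h + 1,
                    clip.shape[2] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

def stream_features(clip, kernel):
    """One toy 3D-CNN 'stream': conv -> ReLU -> global average pooling."""
    fmap = np.maximum(conv3d_valid(clip, kernel), 0.0)
    return np.array([fmap.mean()])  # one scalar descriptor per kernel

def double_stream_features(clip, k_app, k_mot):
    appearance = stream_features(clip, k_app)      # stream 1: raw spatiotemporal input
    motion = np.diff(clip, axis=0)                 # stand-in for optical flow
    dynamics = stream_features(motion, k_mot)      # stream 2: facial-motion variations
    return np.concatenate([appearance, dynamics])  # fused feature vector

rng = np.random.default_rng(0)
clip = rng.standard_normal((8, 16, 16))            # T=8 frames of 16x16 pixels
feats = double_stream_features(clip,
                               rng.standard_normal((3, 3, 3)),
                               rng.standard_normal((2, 3, 3)))
print(feats.shape)  # (2,): one pooled feature per stream
```

In the real model each stream would use many learned kernels and deeper stacks; the point here is only the data flow: the same clip feeds both streams, with the second stream operating on a derived motion representation.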
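The abstract does not state which supervised domain-adaptation criterion is used, but a common ingredient of such training is a penalty on the distance between source (macro-expression) and target (micro-expression) feature distributions. Below is a hedged sketch using a linear-kernel maximum mean discrepancy (MMD), shown purely as an illustration of why shrinking this distance bridges the two datasets; the feature dimensions and data are synthetic.

```python
import numpy as np

def mmd_linear(source_feats, target_feats):
    """Linear-kernel MMD^2: squared distance between feature means,
    for two matrices of shape (n_samples, feature_dim)."""
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(1)
macro = rng.standard_normal((64, 16)) + 1.0   # source: macro-expression features (shifted)
micro = rng.standard_normal((64, 16))         # target: micro-expression features

gap_before = mmd_linear(macro, micro)
# Simulate what an adaptation loss drives the network toward:
# re-centre the source features onto the target mean.
aligned = macro - macro.mean(axis=0) + micro.mean(axis=0)
gap_after = mmd_linear(aligned, micro)
print(gap_after < gap_before)  # aligning the means shrinks the discrepancy
```

In an actual training loop this term would be added to the supervised classification loss, so the shared feature extractor learns representations that are both discriminative and distribution-aligned across the two expression domains.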
Research Area(s)
- 3D-CNN, domain adaptation, micro-expression recognition, optical flow
Citation Format(s)
Facial Micro-Expression Recognition Using Double-Stream 3D Convolutional Neural Network with Domain Adaptation. / Li, Zhengdao; Zhang, Yupei; Xing, Hanwen et al.
In: Sensors, Vol. 23, No. 7, 3577, 04.2023.