Edge-aware motion based facial micro-expression generation with attention mechanism

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

1 Scopus Citation

Detail(s)

Original language: English
Pages (from-to): 97-104
Journal / Publication: Pattern Recognition Letters
Volume: 162
Online published: 16 Sept 2022
Publication status: Published - Oct 2022

Abstract

Facial micro-expression (FME) refers to a brief, spontaneous facial movement that can disclose a person's genuine emotion. Investigations of FMEs are hampered by a lack of data. Fortunately, generative deep neural network models can help synthesize new images with desired FMEs. However, FMEs are too subtle to capture and generate easily. We therefore developed an edge-aware motion based FME generation (EAM-FMEG) method to address these challenges. First, we introduced an auxiliary edge prediction (AEP) task that estimates facial edges to aid subtle feature extraction. Second, we proposed an edge-intensified multi-head self-attention (EIMHSA) module that focuses on important facial regions to improve generation of subtle changes. The method was tested on three FME databases and showed satisfactory results. The ablation study demonstrated that the method produces objects with clear edges and is robust to texture disturbance, shape distortion, and background defects. Furthermore, the method demonstrated strong cross-database generalization, even from RGB to grayscale images and vice versa, enabling general applications.
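The abstract does not specify the internals of the EIMHSA module, but the general idea of edge-intensified self-attention can be sketched as standard multi-head self-attention whose attention logits are biased toward edge-rich tokens. The following NumPy sketch is an illustrative assumption, not the authors' implementation: the function name, the random stand-in projection weights, and the additive edge bias are all hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def edge_intensified_mhsa(x, edge, num_heads=4, rng=None):
    """Toy edge-intensified multi-head self-attention (hypothetical sketch).

    x:    (n, d) token features, e.g. flattened image patches
    edge: (n,)   edge-strength score per token, e.g. from an edge predictor
    The edge scores are added to the attention logits, so every query
    attends more strongly to tokens lying on facial edges.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = x.shape
    dh = d // num_heads
    # Random projections stand in for learned Q/K/V weights in this sketch.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = np.empty_like(x)
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        logits = q[:, s] @ k[:, s].T / np.sqrt(dh)
        # Edge intensification: bias logits toward high-edge key tokens.
        logits = logits + edge[None, :]
        out[:, s] = softmax(logits, axis=-1) @ v[:, s]
    return out
```

With a nonzero edge map, attention mass shifts toward the edge tokens, which is one plausible way to make generation more sensitive to subtle contour changes.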

Research Area(s)

  • Edge-aware motion based generation, Facial micro-expression generation, Multi-head self-attention