A pruning-then-quantization model compression framework for facial emotion recognition

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Author(s)

  • Han Sun (Co-first Author)
  • Tao Li
  • Jiayu Zhao

Detail(s)

Original language: English
Pages (from-to): 225-236
Journal / Publication: Intelligent and Converged Networks
Volume: 4
Issue number: 3
Online published: Sept 2023
Publication status: Published - Sept 2023

Abstract

Facial emotion recognition has achieved great success with the help of large neural models, but the large size of these models prevents their application in practical situations. To bridge this gap, in this paper we combine two mainstream model compression methods, pruning and quantization, and propose a pruning-then-quantization framework to compress neural models for facial emotion recognition tasks. Experiments on three datasets show that our model achieves a high compression ratio while maintaining high performance. In addition, we analyze the layer-wise compression performance of the proposed framework to explore its effect and adaptability in fine-grained modules. © All articles included in the journal are copyrighted to the ITU and TUP.
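The pruning-then-quantization pipeline described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical toy example, not the authors' implementation: it applies magnitude pruning (zeroing the smallest-magnitude weights) followed by symmetric uniform quantization to a flat list of weights. The function name, parameters, and the specific pruning/quantization variants are assumptions for illustration only.

```python
def prune_then_quantize(weights, prune_ratio=0.5, num_bits=8):
    """Toy pruning-then-quantization of a flat weight list.

    Step 1 (pruning): zero out the prune_ratio fraction of
    smallest-magnitude weights.
    Step 2 (quantization): map surviving weights to num_bits
    signed integers with a single symmetric scale factor.
    """
    # Magnitude pruning: sort indices by |w| and zero the smallest.
    n_prune = int(len(weights) * prune_ratio)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0

    # Symmetric uniform quantization of the surviving weights.
    qmax = 2 ** (num_bits - 1) - 1        # e.g. 127 for 8 bits
    max_abs = max(abs(w) for w in pruned) or 1.0
    scale = max_abs / qmax
    quantized = [round(w / scale) for w in pruned]

    # Dequantized values approximate the pruned weights at inference.
    dequantized = [q * scale for q in quantized]
    return quantized, scale, dequantized
```

Running the two stages in this order means quantization only has to represent the surviving weights, which is one intuition behind a pruning-first design: the compression ratio compounds (sparse storage for the zeros, low-bit integers for the rest) while the dequantized values stay close to the pruned model's weights.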

Research Area(s)

  • facial emotion recognition, model compression, Resnet

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.
