Attention-based Parallel Multiscale Convolutional Neural Network for Visual Evoked Potentials EEG Classification

Research output: Journal Publications and Reviews (RGC: 21, 22, 62), 21 - Publication in refereed journal, peer-review

8 Scopus Citations

Author(s)

  • Zhongke Gao
  • Xinlin Sun
  • Mingxu Liu
  • Weidong Dang
  • Chao Ma
  • Guanrong Chen

Detail(s)

Original language: English
Pages (from-to): 2887-2894
Journal / Publication: IEEE Journal of Biomedical and Health Informatics
Volume: 25
Issue number: 8
Online published: 16 Feb 2021
Publication status: Published - Aug 2021

Abstract

Electroencephalography (EEG) decoding is a key component of Visual Evoked Potential-based Brain-Computer Interfaces (BCIs) and directly determines their performance. However, prolonged attention to repetitive visual stimuli causes physical and psychological fatigue, yielding weaker reliable responses and stronger noise interference, which exacerbates the difficulty of Visual Evoked Potential EEG decoding. In this state, subjects cannot concentrate sufficiently and the frequency response of their brains becomes less reliable. To address these problems, we propose an attention-based parallel multiscale convolutional neural network (AMS-CNN). Specifically, the AMS-CNN first extracts robust temporal representations via two parallel convolutional layers with small and large temporal filters, respectively. Two sequential convolution blocks then perform spatial fusion and temporal fusion to extract higher-level feature representations. Next, an attention mechanism weights the features at different moments according to their relevance to the output. Finally, a fully connected layer with a softmax activation function performs the classification. Two fatigue datasets collected in our lab are used to validate the superior classification performance of the proposed method over state-of-the-art methods. Analysis confirms the contribution of the multiscale convolution and the attention mechanism. These results suggest that the proposed framework is a promising way to improve the decoding performance of Visual Evoked Potential BCIs.
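The two core ideas of the abstract, parallel temporal filtering at two scales and attention-based weighting over time, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the kernel sizes, the use of simple averaging filters in place of learned convolutions, and the random scoring vector standing in for learned attention parameters are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def multiscale_features(signal, small_k=5, large_k=25):
    """Parallel temporal filtering at two scales (hypothetical kernel sizes).

    Each branch convolves the 1-D signal with a filter of a different
    length, mimicking the small/large temporal filters of the parallel
    branches; the outputs are stacked as a (T, 2) feature sequence.
    Averaging filters stand in for learned convolution kernels.
    """
    small = np.convolve(signal, np.ones(small_k) / small_k, mode="same")
    large = np.convolve(signal, np.ones(large_k) / large_k, mode="same")
    return np.stack([small, large], axis=1)

def temporal_attention(features, w):
    """Weight per-timestep features by relevance and summarize over time.

    features: (T, d) feature sequence; w: (d,) scoring vector.
    Returns the attention weights (a distribution over time) and the
    attention-weighted summary vector.
    """
    alpha = softmax(features @ w)   # one weight per time step, sums to 1
    return alpha, alpha @ features  # weighted sum of the feature sequence

rng = np.random.default_rng(0)
eeg = rng.standard_normal(256)       # one synthetic EEG channel, 256 samples
H = multiscale_features(eeg)         # (256, 2) two-scale feature sequence
alpha, z = temporal_attention(H, rng.standard_normal(2))
```

In the full model the summary vector `z` would feed the fully connected softmax classifier; here it simply shows how the attention weights let the network emphasize the moments of the trial that are most informative, which is the mechanism the abstract credits for robustness under fatigue.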

Research Area(s)

  • attention mechanism, Brain modeling, Brain-Computer Interface (BCI), convolutional neural network, Electroencephalography, fatigue, Feature extraction, Visual Evoked Potentials, Visualization

Citation Format(s)

Attention-based Parallel Multiscale Convolutional Neural Network for Visual Evoked Potentials EEG Classification. / Gao, Zhongke; Sun, Xinlin; Liu, Mingxu; Dang, Weidong; Ma, Chao; Chen, Guanrong.

In: IEEE Journal of Biomedical and Health Informatics, Vol. 25, No. 8, 08.2021, p. 2887-2894.