EarSE: Bringing Robust Speech Enhancement to COTS Headphones

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

12 Citations (Scopus)

Abstract

Speech enhancement is regarded as key to the quality of digital communication and is gaining increasing attention in the field of audio processing. In this paper, we present EarSE, the first robust, hands-free, multi-modal speech enhancement solution using commercial off-the-shelf headphones. The key idea of EarSE is a novel hardware setting: leveraging the form factor of headphones equipped with a boom microphone to establish a stable acoustic sensing field across the user's face. Furthermore, we design a sensing methodology based on Frequency-Modulated Continuous-Wave (FMCW) signals, an ultrasonic modality sensitive enough to capture the subtle facial articulatory gestures users make while speaking. Moreover, we design a fully attention-based deep neural network that self-adaptively addresses the user diversity problem by introducing the Vision Transformer network. We enhance the collaboration between the speech and ultrasonic modalities using a multi-head attention mechanism and a Factorized Bilinear Pooling gate. Extensive experiments demonstrate that EarSE achieves remarkable performance, increasing SiSDR by 14.61 dB and reducing the word error rate of user speech recognition by 22.45-66.41% in real-world applications. EarSE not only outperforms seven baselines by 38.0% in SiSNR, 12.4% in STOI, and 20.5% in PESQ on average but also maintains practicality. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
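The Factorized Bilinear Pooling (FBP) fusion mentioned in the abstract can be sketched in a minimal form: each modality's feature vector is projected into a shared factor space, the projections are fused by element-wise product, and groups of factor components are sum-pooled into each output dimension. All dimensions, weights, and variable names below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def factorized_bilinear_pooling(x, y, U, V, out_dim, factor):
    # Project each modality into a shared (out_dim * factor)-dim space,
    # fuse by element-wise product, then sum-pool every `factor`
    # components into one output dimension.
    joint = (U.T @ x) * (V.T @ y)              # shape: (out_dim * factor,)
    return joint.reshape(out_dim, factor).sum(axis=1)

# Hypothetical feature sizes for the speech and ultrasonic branches.
speech_dim, ultra_dim = 64, 32
out_dim, factor = 16, 4
U = rng.standard_normal((speech_dim, out_dim * factor))
V = rng.standard_normal((ultra_dim, out_dim * factor))

speech_feat = rng.standard_normal(speech_dim)
ultra_feat = rng.standard_normal(ultra_dim)
fused = factorized_bilinear_pooling(speech_feat, ultra_feat, U, V, out_dim, factor)
print(fused.shape)  # (16,)
```

Compared with full bilinear pooling, which would need an (out_dim x speech_dim x ultra_dim) tensor, the factorized form keeps only two low-rank projection matrices, which is why it is practical as a gating mechanism in a fusion network.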
Original language: English
Article number: 158
Number of pages: 33
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume: 7
Issue number: 4
DOIs
Publication status: Published - Dec 2023

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Funding

The work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 21201420 and CityU 11201422). It was also partially supported by CityU APRC grant 9610471, CityU MFPRC grant 9680333, CityU SIRG grant 7020057, and CityU SRG-Fd grants 7005666 and 7005984.

Research Keywords

  • speech enhancement
  • COTS device
  • acoustic sensing
  • multi-modality fusion
  • deep learning

RGC Funding Information

  • RGC-funded
