EarSE: Bringing Robust Speech Enhancement to COTS Headphones
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Article number | 158 |
Number of pages | 33 |
Journal / Publication | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies |
Volume | 7 |
Issue number | 4 |
Publication status | Published - Dec 2023 |
Link(s)
Abstract
Speech enhancement is regarded as key to the quality of digital communication and is gaining increasing attention in the research field of audio processing. In this paper, we present EarSE, the first robust, hands-free, multi-modal speech enhancement solution using commercial off-the-shelf headphones. The key idea of EarSE is a novel hardware setting---leveraging the form factor of headphones equipped with a boom microphone to establish a stable acoustic sensing field across the user's face. Furthermore, we design a sensing methodology based on Frequency-Modulated Continuous-Wave (FMCW), an ultrasonic modality sensitive enough to capture the subtle facial articulatory gestures users make while speaking. Moreover, we design a fully attention-based deep neural network that self-adaptively addresses the user diversity problem by introducing the Vision Transformer network. We enhance the collaboration between the speech and ultrasonic modalities using a multi-head attention mechanism and a Factorized Bilinear Pooling gate. Extensive experiments demonstrate that EarSE achieves remarkable performance, increasing SiSDR by 14.61 dB and reducing the word error rate of user speech recognition by 22.45-66.41% in real-world applications. EarSE not only outperforms seven baselines by 38.0% in SiSNR, 12.4% in STOI, and 20.5% in PESQ on average but also maintains practicality. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
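The FMCW principle the abstract refers to can be illustrated with a minimal sketch: a linear chirp is transmitted, and mixing the received echo with the transmitted signal ("dechirping") produces a low-frequency beat tone whose frequency is proportional to the echo delay. The sampling rate, sweep band, and chirp duration below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Assumed parameters for illustration (not taken from the paper).
fs = 48_000               # sampling rate in Hz
f0, f1 = 18_000, 22_000   # near-ultrasonic sweep band in Hz
T = 0.01                  # chirp duration in seconds

t = np.arange(int(fs * T)) / fs
k = (f1 - f0) / T         # sweep rate in Hz per second

# Linear FMCW chirp: instantaneous frequency rises from f0 to f1.
chirp = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Simulate an echo delayed by 20 samples (e.g., a reflection off the face).
delay = 20
echo = np.roll(chirp, delay)

# Dechirp: the product contains a beat tone at f_beat = k * (delay / fs),
# here roughly 167 Hz, far below the transmitted band.
beat = chirp * echo
spectrum = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
peak_hz = np.argmax(spectrum) * fs / len(beat)
```

Because facial articulatory gestures change the echo paths, tracking how this beat spectrum evolves over successive chirps is what makes FMCW sensitive to subtle motion.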
Research Area(s)
- speech enhancement, COTS device, acoustic sensing, multi-modality fusion, deep learning
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
EarSE: Bringing Robust Speech Enhancement to COTS Headphones. / DUAN, Di; CHEN, Yongliang; XU, Weitao et al.
In: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 7, No. 4, 158, 12.2023.