Abstract
Speech enhancement is key to the quality of digital communication and is attracting increasing attention in audio-processing research. In this paper, we present EarSE, the first robust, hands-free, multi-modal speech enhancement solution using commercial off-the-shelf (COTS) headphones. The key idea of EarSE is a novel hardware setting: leveraging the form factor of headphones equipped with a boom microphone to establish a stable acoustic sensing field across the user's face. We design a sensing methodology based on Frequency-Modulated Continuous-Wave (FMCW) ultrasound, a modality sensitive enough to capture the subtle facial articulatory gestures users make when speaking. We further design a fully attention-based deep neural network that self-adaptively addresses the user-diversity problem by introducing a Vision Transformer network, and we enhance the collaboration between the speech and ultrasonic modalities with a multi-head attention mechanism and a Factorized Bilinear Pooling gate. Extensive experiments demonstrate that EarSE achieves remarkable performance, increasing SiSDR by 14.61 dB and reducing the word error rate of user speech recognition by 22.45-66.41% in real-world applications. EarSE not only outperforms seven baselines by 38.0% in SiSNR, 12.4% in STOI, and 20.5% in PESQ on average but also remains practical. © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
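The abstract describes FMCW-based ultrasonic sensing only at a high level. As a hedged illustration of the underlying FMCW ranging principle (this is not the authors' implementation; the sample rate, the 18-22 kHz band, the sweep time, and the face distance are all illustrative assumptions), a minimal simulation in Python could look like:

```python
import numpy as np

fs = 48_000                      # assumed audio sample rate (Hz)
c = 343.0                        # speed of sound (m/s)
f0, B, T = 18_000, 4_000, 0.04   # chirp start freq (Hz), bandwidth (Hz), sweep time (s)
t = np.arange(int(fs * T)) / fs
slope = B / T                    # sweep rate (Hz/s)

# Transmitted linear up-chirp, 18 kHz -> 22 kHz (real-valued, as a speaker would emit)
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t**2))

# Simulated echo from the face, delayed by the round-trip time of flight
distance = 0.12                               # assumed mic-to-face path (m)
delay = int(round(2 * distance / c * fs))     # round-trip delay in samples
rx = np.concatenate([np.zeros(delay), tx[:-delay]])

# Dechirp: mixing tx with rx produces a beat tone at slope * tau,
# so the echo delay (and hence range) shows up as a low-frequency peak
beat = tx * rx
spectrum = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)
low = freqs < 2_000                           # ignore aliased sum-frequency terms
f_beat = freqs[low][np.argmax(spectrum[low])]
est_distance = f_beat * c / (2 * slope)
print(f"beat {f_beat:.1f} Hz -> range {est_distance * 100:.1f} cm")
```

A real system would track how this beat spectrum changes frame by frame, since articulatory gestures modulate the echo paths around the mouth; the single-static-reflector setup above only shows why a delay maps linearly to a beat frequency.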
| Original language | English |
|---|---|
| Article number | 158 |
| Number of pages | 33 |
| Journal | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies |
| Volume | 7 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Dec 2023 |
Bibliographical note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Funding
This work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Nos. CityU 21201420 and CityU 11201422). It was also partially supported by CityU APRC grant 9610471, CityU MFPRC grant 9680333, CityU SIRG grant 7020057, and CityU SRG-Fd grants 7005666 and 7005984.
Research Keywords
- speech enhancement
- COTS device
- acoustic sensing
- multi-modality fusion
- deep learning
RGC Funding Information
- RGC-funded
Fingerprint
Dive into the research topics of 'EarSE: Bringing Robust Speech Enhancement to COTS Headphones'. Together they form a unique fingerprint.
Projects
- GRF: Pushing the Boundaries of Wearable Sensing: A Tale of Two Modalities
  XU, W. (Principal Investigator / Project Coordinator) & Ma, D. (Co-Investigator)
  1/01/23 → …
  Project: Research
- ECS: Robust and Lightweight Security Framework for Low-power Wide-area Network
  XU, W. (Principal Investigator / Project Coordinator)
  1/01/21 → 27/12/24
  Project: Research