A Trinity Platform with Synthesized mmWaves for Human Motion Sensing

Project: Research

Description

Sensing human motions, e.g., recognizing activities or tracking body skeletons, enables plenty of exciting and useful applications. Vision-based solutions with cameras can now achieve accurate motion sensing, hinging on the availability of rich labelled datasets and machine-learning innovations, yet they are limited by line-of-sight requirements, illumination, privacy concerns, etc. This motivates emerging efforts to explore wireless-based solutions that avoid such limitations, with the adoption of millimeter-waves (mmWaves) as one representative example. However, unlike vision, mmWave-based training datasets are scarce, because collecting and labelling such data are expensive and difficult.

This project proposes a new mmWave-sensing platform to overcome this problem. Our design novelty is to leverage available vision-based datasets (which provide the locations of the body's key points under different motions) to synthesize the mmWave sensing signals that bounce off the human body, so that the synthesized signals inherit labels from the vision-based datasets directly. We show the great potential of generating such labelled synthesized data with high quality to address the training-data scarcity issue, and further use them to enable a trinity of sensing services on this platform that work with commercial radars directly: 1) zero-shot activity recognition, wherein the classifier reads real mmWaves for recognition but is trained on synthesized data only; 2) full-body skeleton tracking with few-shot learning on real mmWaves; and 3) searching for a radar's optimal location in advance to facilitate deployment.

Realizing our design encounters two main challenges. First, signal synthesis is complicated by the reflection and blocking of signals by various body parts. We propose a novel software pipeline that emulates the entire procedure from the transmission to the reception of the synthesized signals (a simplified illustration follows this description). Second, even when mmWave signals can be synthesized in this way, inevitable distinctions from real signals remain due to subtle differences, and we propose effective countermeasures to handle such micro-level differences.

The project's significance is as follows. Given the lack of training datasets, our design's ability to reduce the expensive data-collecting and data-labelling overhead takes a meaningful step towards bootstrapping mmWave-system development. With sufficient development, more data could be generated by the community to push this technique further towards maturity. Moreover, because our methodology works at the signal level, supplementary sensing services beyond motion sensing can also be enabled, e.g., optimizing a radar's deployment. Overall, the idea of leveraging vision-based data to bootstrap mmWave-based human sensing is essentially novel, and the techniques are also innovative.
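
To make the synthesis idea concrete, below is a minimal sketch, not the project's actual pipeline, of how labelled mmWave data could be derived from vision-based keypoints: each keypoint is treated as an ideal point scatterer, and one FMCW beat (IF) signal is generated per chirp. The function name and radar parameters (synthesize_if_signal, the 77 GHz carrier, 30 MHz/µs chirp slope, sampling rate) are illustrative assumptions; the project's pipeline additionally models reflection and blocking by body parts, which this toy model omits.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def synthesize_if_signal(keypoints, radar_pos, fc=77e9, slope=30e12,
                         fs=5e6, n_samples=256, rcs=None):
    """Synthesize one FMCW chirp's IF (beat) signal by treating each
    vision-derived body keypoint as an ideal point scatterer.
    (Illustrative assumption: no body-part reflection/blocking model.)"""
    t = np.arange(n_samples) / fs              # fast-time axis (s)
    if rcs is None:
        rcs = np.ones(len(keypoints))          # uniform reflectivity
    sig = np.zeros(n_samples, dtype=complex)
    for p, a in zip(keypoints, rcs):
        r = np.linalg.norm(np.asarray(p, dtype=float) - radar_pos)
        tau = 2.0 * r / C                      # round-trip delay (s)
        # dechirped point-scatterer model: beat frequency slope*tau, phase 2*pi*fc*tau
        sig += a * np.exp(1j * 2 * np.pi * (slope * tau * t + fc * tau))
    return sig

# One frame of 3-D keypoints (metres) from a labelled vision dataset,
# e.g. annotated as "waving"; the synthesized chirp inherits that label.
frame = [(0.10, 1.80, 2.00), (0.30, 1.50, 2.10), (-0.20, 1.00, 2.00)]
chirp = synthesize_if_signal(frame, radar_pos=np.zeros(3))
range_profile = np.abs(np.fft.fft(chirp))      # range FFT for a quick sanity check
```

Because every synthesized chirp is generated from an already-labelled vision frame, it carries that frame's activity or skeleton annotation at no extra cost, which is exactly the kind of data the zero-shot and few-shot services described above would train on.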

Detail(s)

Project number: 9043350
Grant type: GRF
Status: Active
Effective start/end date: 1/09/22 → …