Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Bochen Xie, Yongjian Deng, Zhanpeng Shao et al.
Detail(s)
| Detail | Value |
| --- | --- |
| Original language | English |
| Journal / Publication | IEEE Transactions on Circuits and Systems for Video Technology |
| Publication status | Online published - 23 Aug 2024 |
Abstract
Event cameras are neuromorphic vision sensors that record a scene as sparse and asynchronous event streams. Most event-based methods project events into dense frames and process them with conventional vision models, resulting in high computational complexity. A recent trend is to develop point-based networks that achieve efficient event processing by learning sparse representations. However, existing works may lack robust local information aggregators and effective feature interaction operations, which limits their modeling capability. To this end, we propose an attention-aware model named Event Voxel Set Transformer (EVSTr) for efficient spatiotemporal representation learning on event streams. It first converts the event stream into voxel sets and then hierarchically aggregates voxel features to obtain robust representations. The core of EVSTr is an event voxel transformer encoder consisting of two well-designed components: the Multi-Scale Neighbor Embedding Layer (MNEL) for local information aggregation and the Voxel Self-Attention Layer (VSAL) for global feature interaction. To enable the network to incorporate a long-range temporal structure, we introduce a segment modeling strategy (S²TM) that learns motion patterns from a sequence of segmented voxel sets. The proposed model is evaluated on two recognition tasks: object classification and action recognition. To provide a convincing model evaluation, we present a new event-based action recognition dataset (NeuroHAR) recorded in challenging scenarios. Comprehensive experiments show that EVSTr achieves state-of-the-art performance while maintaining low model complexity. © 2024 IEEE.
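The abstract does not give implementation details, but the voxel-set input representation it describes can be illustrated with a minimal NumPy sketch. The function name `events_to_voxel_set`, the grid resolution, and the per-voxel polarity-count feature below are assumptions chosen for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: converting an (x, y, t, p) event stream into a sparse
# voxel set, roughly in the spirit of the input representation the abstract
# describes. Grid sizes and the per-voxel feature (polarity counts) are
# illustrative assumptions, not the authors' design.
import numpy as np

def events_to_voxel_set(events, sensor_hw=(180, 240), grid=(12, 16, 8)):
    """events: (N, 4) array of [x, y, t, p] with polarity p in {-1, +1}.
    Returns coords (M, 3) and feats (M, 2) for the non-empty voxels only."""
    H, W = sensor_hw
    gy, gx, gt = grid
    x, y, t, p = events.T
    # Normalize timestamps to [0, 1] so the stream fits a fixed temporal grid.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    ix = np.clip((x / W * gx).astype(int), 0, gx - 1)
    iy = np.clip((y / H * gy).astype(int), 0, gy - 1)
    it = np.clip((t_norm * gt).astype(int), 0, gt - 1)
    # Flatten the 3-D voxel index and keep only occupied voxels.
    flat = (iy * gx + ix) * gt + it
    uniq, inv = np.unique(flat, return_inverse=True)
    # Accumulate one count per polarity channel for each occupied voxel.
    feats = np.zeros((len(uniq), 2))
    np.add.at(feats, (inv, (p > 0).astype(int)), 1.0)
    # Recover (iy, ix, it) coordinates from the flattened index.
    coords = np.stack([uniq // (gx * gt), (uniq // gt) % gx, uniq % gt], axis=1)
    return coords, feats

# Example: 1000 random events on a 180x240 sensor.
rng = np.random.default_rng(0)
ev = np.column_stack([rng.uniform(0, 240, 1000), rng.uniform(0, 180, 1000),
                      np.sort(rng.uniform(0, 1e5, 1000)), rng.choice([-1, 1], 1000)])
coords, feats = events_to_voxel_set(ev)
```

For a stream of N events this yields at most gy·gx·gt voxels, and typically far fewer, which is the sparsity a set-based encoder of this kind can exploit instead of processing dense frames.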
Research Area(s)
- Event camera, Neuromorphic vision, Attention mechanism, Object classification, Action recognition
Citation Format(s)
Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams. / Xie, Bochen; Deng, Yongjian; Shao, Zhanpeng et al.
In: IEEE Transactions on Circuits and Systems for Video Technology, 23.08.2024.