Event Voxel Set Transformer for Spatiotemporal Representation Learning on Event Streams

Bochen Xie, Yongjian Deng, Zhanpeng Shao, Qingsong Xu, Youfu Li*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

11 Citations (Scopus)

Abstract

Event cameras are neuromorphic vision sensors that record a scene as sparse and asynchronous event streams. Most event-based methods project events into dense frames and process them with conventional vision models, resulting in high computational complexity. A recent trend is to develop point-based networks that achieve efficient event processing by learning sparse representations. However, existing works may lack robust local information aggregators and effective feature interaction operations, which limits their modeling capability. To this end, we propose an attention-aware model named Event Voxel Set Transformer (EVSTr) for efficient spatiotemporal representation learning on event streams. It first converts the event stream into voxel sets and then hierarchically aggregates voxel features to obtain robust representations. The core of EVSTr is an event voxel transformer encoder built from two well-designed components: the Multi-Scale Neighbor Embedding Layer (MNEL) for local information aggregation and the Voxel Self-Attention Layer (VSAL) for global feature interaction. To enable the network to incorporate long-range temporal structure, we introduce a segment modeling strategy (S²TM) that learns motion patterns from a sequence of segmented voxel sets. The proposed model is evaluated on two recognition tasks: object classification and action recognition. To provide a convincing evaluation, we present a new event-based action recognition dataset (NeuroHAR) recorded in challenging scenarios. Comprehensive experiments show that EVSTr achieves state-of-the-art performance while maintaining low model complexity. © 2024 IEEE.
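The abstract describes the pipeline only at a high level. As a rough illustration of its two main ideas (grouping raw events into non-empty spatiotemporal voxels, then letting voxel tokens interact through global self-attention), here is a minimal sketch. All names (e.g., events_to_voxel_set, VoxelSelfAttention), grid sizes, and per-voxel feature choices are illustrative assumptions, not the paper's actual MNEL/VSAL design.

```python
# Hypothetical sketch of the voxel-set idea from the abstract; not the
# authors' implementation. Grid sizes and features are made-up placeholders.
import numpy as np
import torch
import torch.nn as nn

def events_to_voxel_set(events, sensor_hw=(260, 346), grid=(8, 8), time_bins=4):
    """Group (x, y, t, p) events into non-empty spatiotemporal voxels.

    events: float array of shape (N, 4) with columns x, y, t, p.
    Returns voxel centroid coordinates (M, 3) and per-voxel features (M, 2).
    """
    x, y, t, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)           # normalize time to [0, 1]
    ix = np.clip((x / sensor_hw[1] * grid[1]).astype(int), 0, grid[1] - 1)
    iy = np.clip((y / sensor_hw[0] * grid[0]).astype(int), 0, grid[0] - 1)
    it = np.clip((t * time_bins).astype(int), 0, time_bins - 1)
    keys = (it * grid[0] + iy) * grid[1] + ix                  # flat voxel index
    coords, feats = [], []
    for k in np.unique(keys):                                  # keep non-empty voxels only
        m = keys == k
        coords.append([x[m].mean(), y[m].mean(), t[m].mean()]) # voxel centroid
        feats.append([m.sum(), p[m].mean()])                   # event count, mean polarity
    return np.asarray(coords, np.float32), np.asarray(feats, np.float32)

class VoxelSelfAttention(nn.Module):
    """Single-head self-attention over a set of voxel tokens (a stand-in for VSAL)."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):                                 # tokens: (B, M, dim)
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return self.proj(attn @ v)                             # global feature interaction

# Random stand-in events on a 346x260 sensor (DAVIS346-like resolution assumed).
events = np.random.rand(10_000, 4) * np.array([346.0, 260.0, 1.0, 1.0])
coords, feats = events_to_voxel_set(events)
tokens = torch.from_numpy(np.hstack([coords, feats]))[None]    # (1, M, 5)
out = VoxelSelfAttention(dim=5)(tokens)
print(coords.shape, out.shape)
```

In this sketch the voxel set stays sparse (only occupied voxels become tokens), which is the efficiency argument the abstract makes against dense frame projection; the paper's S²TM strategy would additionally split the stream into segments and model the sequence of such voxel sets.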
Original language: English
Pages (from-to): 13427-13440
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 34
Issue number: 12
Online published: 23 Aug 2024
DOIs
Publication status: Published - Dec 2024

Funding

This work was supported in part by the Research Grants Council of Hong Kong under Grant CityU11213420 and Grant CityU11206122, and in part by the National Natural Science Foundation of China under Grant 62173286, Grant 62203024, and Grant 61976191.

Research Keywords

  • Event camera
  • Neuromorphic vision
  • Attention mechanism
  • Object classification
  • Action recognition

RGC Funding Information

  • RGC-funded
