Spatially-guided Temporal Aggregation for Robust Event-RGB Optical Flow Estimation

Qianang Zhou, Junhui Hou*, Meiyi Yang, Yongjian Deng, Youfu Li, Junlin Xiong

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Current optical flow methods exploit the stable appearance of frame (or RGB) data to establish robust correspondences across time. Event cameras, on the other hand, provide high-temporal-resolution motion cues and excel in challenging scenarios. These complementary characteristics underscore the potential of integrating frame and event data for optical flow estimation. However, most cross-modal approaches fail to fully exploit the complementary advantages, relying instead on simply stacking information. This study introduces a novel approach that uses the spatially dense frame modality to guide the aggregation of the temporally dense event modality, achieving effective cross-modal fusion. Specifically, we propose an event-enhanced frame representation that preserves the rich texture of frames and the basic structure of events. We use the enhanced representation as the guiding modality and employ events to capture temporally dense motion information. The robust motion features derived from the guiding modality direct the aggregation of motion information from events. To further enhance fusion, we propose a transformer-based module that complements sparse event motion features with spatially rich frame information and enhances global information propagation. Additionally, a mix-fusion encoder is designed to extract comprehensive spatiotemporal contextual features from both modalities. Extensive experiments on the MVSEC and DSEC-Flow datasets demonstrate the effectiveness of our framework. Leveraging the complementary strengths of frames and events, our method achieves leading performance on the DSEC-Flow dataset. Compared to the event-only model, frame guidance improves accuracy by 10%. Furthermore, it outperforms the state-of-the-art fusion-based method with a 4% accuracy gain and a 45% reduction in inference time. The code is publicly available at https://github.com/ZhouQianang/STFlow.
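To illustrate the core idea of spatially-guided temporal aggregation, the sketch below shows one plausible realization: frame-derived features serve as attention queries that gather motion information from temporally dense event tokens. This is a minimal conceptual sketch in PyTorch, not the authors' STFlow implementation (which is available at the repository above); the class name, feature dimensions, and the cross-attention formulation are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' STFlow code): frame-guided
# aggregation of event motion features via cross-attention.
import torch
import torch.nn as nn

class SpatiallyGuidedAggregation(nn.Module):
    """Aggregate temporally dense event motion features under the
    guidance of spatially dense frame features (conceptual only)."""

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        # Frame features act as queries; event features supply keys/values.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_feat: torch.Tensor, event_feat: torch.Tensor):
        # frame_feat: (B, N, C) tokens from the guiding (frame) modality
        # event_feat: (B, T*N, C) tokens from the event stream over T bins
        fused, _ = self.attn(query=frame_feat, key=event_feat, value=event_feat)
        # Residual connection keeps the spatially rich frame information intact.
        return self.norm(frame_feat + fused)

# Toy usage with random tensors standing in for encoder outputs.
module = SpatiallyGuidedAggregation(dim=128, num_heads=4)
frames = torch.randn(2, 64, 128)   # B=2, 64 spatial tokens, C=128
events = torch.randn(2, 320, 128)  # 5 temporal bins x 64 tokens
out = module(frames, events)
print(out.shape)  # torch.Size([2, 64, 128])
```

Using the frame modality as the query side reflects the paper's stated premise that frames provide stable appearance to anchor correspondences, while events contribute high-temporal-resolution motion cues to be aggregated.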
Original language: English
Pages (from-to): 1-11
Journal: IEEE Transactions on Multimedia
Publication status: Online published - 16 Feb 2026

Research Keywords

  • modal fusion
  • optical flow
  • event-based vision
