Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Zhu, Zhiyu; Hou, Junhui; Wu, Dapeng Oliver
Related Research Unit(s)
Detail(s)
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023) |
| Publisher | Institute of Electrical and Electronics Engineers, Inc. |
| Pages | 21988-21998 |
| ISBN (electronic) | 979-8-3503-0718-4 |
| Publication status | Published - Oct 2023 |
Conference
| Title | IEEE International Conference on Computer Vision 2023 (ICCV 2023) |
| --- | --- |
| Location | Paris Convention Center |
| Country | France |
| City | Paris |
| Period | 2 - 6 October 2023 |
Link(s)
Abstract
This paper addresses the problem of cross-modal object tracking from RGB videos and event data. Rather than constructing a complex cross-modal fusion network, we explore the great potential of a pre-trained vision Transformer (ViT). In particular, we investigate plug-and-play training augmentations that encourage the ViT to bridge the vast distribution gap between the two modalities, enabling comprehensive cross-modal information interaction and thus enhancing its capability. Specifically, we propose a mask modeling strategy that randomly masks a specific modality of some tokens, enforcing proactive interaction between tokens from different modalities. To mitigate the network oscillations caused by the masking strategy and to further amplify its positive effect, we then theoretically propose an orthogonal high-rank loss that regularizes the attention matrix. Extensive experiments demonstrate that our plug-and-play training augmentation techniques significantly boost state-of-the-art one-stream and two-stream trackers in terms of both tracking precision and success rate. Our new perspective and findings will potentially bring insights to the field of leveraging powerful pre-trained ViTs to model cross-modal data. The code is publicly available at https://github.com/ZHU-Zhiyu/High-Rank_RGB-Event_Tracker.
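To make the two augmentations more concrete, below is a minimal PyTorch sketch of how a modality-masking step and an attention-matrix orthogonality regularizer could be wired up. It is not the authors' implementation (see the GitHub link above for that): the masking ratio, the zero mask token, the per-sample modality choice, and the Frobenius-norm penalty on the row Gram matrix are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted in docstrings), not the released code.
import torch
import torch.nn.functional as F


def random_modality_mask(rgb_tokens, event_tokens, mask_ratio=0.3, mask_token=None):
    """Pick one modality at random and replace a fraction of its tokens with a
    mask token, so the tracker must lean on the other modality at those
    positions. Token shapes: (B, N, C). The 0.3 ratio and the zero mask token
    are illustrative assumptions, not the paper's settings."""
    mask_rgb = bool(torch.rand(()) < 0.5)        # which modality gets masked
    tokens = rgb_tokens if mask_rgb else event_tokens
    B, N, C = tokens.shape
    if mask_token is None:
        mask_token = tokens.new_zeros(C)
    num_mask = int(N * mask_ratio)
    # per-sample random choice of token positions to mask
    idx = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :num_mask]
    masked_pos = torch.zeros(B, N, dtype=torch.bool, device=tokens.device)
    masked_pos.scatter_(1, idx, torch.ones_like(idx, dtype=torch.bool))
    masked = torch.where(masked_pos.unsqueeze(-1), mask_token.view(1, 1, C), tokens)
    return (masked, event_tokens) if mask_rgb else (rgb_tokens, masked)


def orthogonal_high_rank_loss(attn):
    """Push the rows of the attention matrix toward mutual orthogonality,
    which keeps the matrix close to full rank. attn: (B, H, N, N) attention
    weights. The ||A A^T - I||_F-style penalty is a standard orthogonality
    surrogate and an assumption about the exact form used in the paper."""
    rows = F.normalize(attn, dim=-1)             # unit-norm rows
    gram = rows @ rows.transpose(-1, -2)         # row Gram matrix, (B, H, N, N)
    eye = torch.eye(gram.size(-1), device=gram.device, dtype=gram.dtype)
    return ((gram - eye) ** 2).mean()
```

In a training loop, the masked RGB and event tokens would be fed to the ViT backbone as usual, and the orthogonality term would be added to the tracking loss with a small weight; the weighting and where the attention maps are tapped are likewise assumptions here.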
© 2023 IEEE
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers. / Zhu, Zhiyu; Hou, Junhui; Wu, Dapeng Oliver.
Proceedings - 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023). Institute of Electrical and Electronics Engineers, Inc., 2023. p. 21988-21998.
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review