CoDynTrust: Robust Asynchronous Collaborative Perception via Dynamic Feature Trust Modulus

Yunjiang Xu, Lingzhi Li*, Jin Wang*, Benyuan Yang, Zhiwen Wu, Xinhong Chen, Jianping Wang

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

Collaborative perception, which fuses information from multiple agents, can extend the perception range and thereby improve perception performance. However, temporal asynchrony in real-world environments, caused by communication delays, clock misalignment, or sampling configuration differences, can lead to information mismatches. If these are not well handled, collaborative performance degrades and, worse, safety accidents may occur. To tackle this challenge, we propose CoDynTrust, an uncertainty-encoded asynchronous fusion perception framework that is robust to the information mismatches caused by temporal asynchrony. CoDynTrust generates a dynamic feature trust modulus (DFTM) for each region of interest by modeling aleatoric and epistemic uncertainty and by selectively suppressing or retaining single-vehicle features, thereby mitigating information mismatches. We then design a multi-scale fusion module to handle the multi-scale feature maps processed by DFTM. Compared to existing works that also consider asynchronous collaborative perception, CoDynTrust combats various forms of low-quality information in temporally asynchronous scenarios and allows uncertainty to be propagated to downstream tasks such as planning and control. Experimental results demonstrate that CoDynTrust significantly reduces the performance degradation caused by temporal asynchrony across multiple datasets, achieving state-of-the-art detection performance even under temporal asynchrony. The code is available at https://github.com/CrazyShout/CoDynTrust. © 2025 IEEE.
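The core idea described in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the authors' implementation: it assumes per-ROI aleatoric and epistemic uncertainty estimates are already available, combines them into a trust score in [0, 1], thresholds that score to suppress untrusted regions (mimicking the DFTM's selective suppression), and uses it to weight the fusion of ego and collaborator features. The function names, the exponential trust mapping, and the threshold `tau` are all assumptions made for illustration.

```python
import numpy as np

def dynamic_feature_trust_modulus(aleatoric_u, epistemic_u, tau=0.5):
    """Illustrative sketch: map per-ROI uncertainty to a trust score.

    Low combined uncertainty -> trust near 1; scores below the
    threshold tau are zeroed, suppressing that region's features.
    """
    total_u = aleatoric_u + epistemic_u        # combined uncertainty per ROI
    trust = np.exp(-total_u)                   # monotone map to (0, 1]
    return np.where(trust >= tau, trust, 0.0)  # suppress untrusted ROIs

def fuse(ego_feats, collab_feats, trust):
    """Trust-weighted fusion of ego and collaborator ROI features."""
    w = trust[:, None]                         # broadcast over feature dim
    return (1.0 - w) * ego_feats + w * collab_feats
```

In this toy version, a stale or misaligned collaborator region (high uncertainty) receives zero trust, so the fused output falls back entirely on the ego vehicle's own features for that region.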
Original language: English
Title of host publication: 2025 IEEE International Conference on Robotics and Automation (ICRA)
Publisher: IEEE
Pages: 336-342
Number of pages: 7
ISBN (Electronic): 9798331541392
ISBN (Print): 9798331541408
DOIs
Publication status: Published - 2025
Event: 2025 IEEE International Conference on Robotics and Automation (ICRA 2025), Georgia World Congress Center, Atlanta, United States
Duration: 19 May 2025 – 23 May 2025

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Conference

Conference: 2025 IEEE International Conference on Robotics and Automation (ICRA 2025)
Abbreviated title: ICRA 2025
Place: United States
City: Atlanta
Period: 19/05/25 – 23/05/25

Funding

This work was supported in part by the National Natural Science Foundation of China (62072321), the Six Talent Peak Project of Jiangsu Province (XYDXX-084), the Science and Technology Program of Jiangsu Province (BZ2024062), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (22KJA520007), Suzhou Planning Project of Science and Technology (2023ss03), Hong Kong Research Grant Council Under GRF (11210622).

RGC Funding Information

  • RGC-funded
