InconSeg: Residual-Guided Fusion With Inconsistent Multi-Modal Data for Negative and Positive Road Obstacles Segmentation

Zhen Feng, Yanning Guo, David Navarro-Alarcon, Yueyong Lyu, Yuxiang Sun*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

7 Citations (Scopus)

Abstract

Segmentation of road obstacles, including negative and positive obstacles, is critical to the safe navigation of autonomous vehicles. Recent methods have shown increasing interest in multi-modal data fusion (e.g., RGB and depth/disparity images). Although these methods achieve improved segmentation accuracy, we find that their performance degrades easily when the two modalities carry inconsistent information, for example, distant obstacles that are visible in RGB images but not in depth/disparity images. To address this issue, we propose a novel two-encoder-two-decoder RGB-depth/disparity multi-modal network with Residual-Guided Fusion modules. Unlike most existing networks, which fuse feature maps in the encoders, we fuse feature maps in the decoders. We also release a large-scale RGB-depth/disparity dataset recorded in both urban and rural environments, with manually labeled ground truth for both negative- and positive-obstacle segmentation. Extensive experimental results demonstrate that our network achieves state-of-the-art performance compared with other networks. © 2023 IEEE.
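The abstract's key idea, using the residual (disagreement) between modalities to guide decoder-stage fusion, can be illustrated with a toy sketch. This is not the paper's actual Residual-Guided Fusion module; the gating form below (a sigmoid of the negative absolute residual that down-weights the depth feature where the modalities disagree) is an assumption made purely for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_guided_fusion(rgb_feat, depth_feat):
    # Residual between modalities: large where RGB and depth disagree,
    # e.g., a distant obstacle visible in RGB but missing from depth.
    residual = np.abs(rgb_feat - depth_feat)
    # Gate shrinks toward 0 as disagreement grows, so the unreliable
    # depth contribution is suppressed in the fused decoder feature.
    gate = sigmoid(-residual)
    return rgb_feat + gate * depth_feat

# Consistent modalities: residual is 0, so the gate is sigmoid(0) = 0.5
rgb = np.ones((1, 4, 4))
depth = np.ones((1, 4, 4))
fused = residual_guided_fusion(rgb, depth)  # every entry is 1.5
```

Under this (assumed) gating, consistent regions blend both modalities, while inconsistent regions fall back toward the RGB feature alone, which matches the failure case the abstract describes.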
Original language: English
Pages (from-to): 4871-4878
Journal: IEEE Robotics and Automation Letters
Volume: 8
Issue number: 8
Online published: 2 May 2023
DOIs
Publication status: Published - Aug 2023
Externally published: Yes

Research Keywords

  • autonomous vehicles
  • multi-modal fusion
  • Negative obstacles
  • road obstacles
  • semantic segmentation

