Dynamic Fusion Module Evolves Drivable Area and Road Anomaly Detection: A Benchmark and Algorithms

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

83 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 10750-10760
Journal / Publication: IEEE Transactions on Cybernetics
Volume: 52
Issue number: 10
Online published: 24 Mar 2021
Publication status: Published - Oct 2022
Externally published: Yes

Abstract

Joint detection of drivable areas and road anomalies is critical for mobile robots. Recently, many semantic segmentation approaches based on convolutional neural networks (CNNs) have been proposed for pixelwise drivable area and road anomaly detection, and benchmark datasets such as KITTI and Cityscapes have been widely used. However, the existing benchmarks are mostly designed for self-driving cars; a benchmark for ground mobile robots, such as robotic wheelchairs, is still lacking. Therefore, in this article, we first build a drivable area and road anomaly detection benchmark for ground mobile robots and evaluate existing state-of-the-art (SOTA) single-modal and data-fusion semantic segmentation CNNs on six modalities of visual features. Furthermore, we propose a novel module, referred to as the dynamic fusion module (DFM), which can be easily deployed in existing data-fusion networks to fuse different types of visual features effectively and efficiently. The experimental results show that the transformed disparity image is the most informative visual feature and that the proposed DFM-RTFNet outperforms the SOTA approaches. In addition, our DFM-RTFNet achieves competitive performance on the KITTI road benchmark. © 2021 IEEE.
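To make the idea of input-dependent ("dynamic") fusion of two visual modalities concrete, the sketch below shows one minimal way such a module could look in PyTorch. It is a hypothetical illustration, not the paper's actual DFM design: the class name DynamicFusionSketch, the channel count, and the gating head are all assumptions introduced here for exposition. It merely demonstrates the general pattern of predicting fusion weights from the inputs themselves rather than using a fixed combination rule.

```python
import torch
import torch.nn as nn

class DynamicFusionSketch(nn.Module):
    """Hypothetical two-modality fusion block (NOT the paper's exact DFM).

    Predicts per-channel gating weights from the concatenated RGB and
    transformed-disparity feature maps, then fuses the two streams as a
    weighted sum. The channel count `c` is an illustrative parameter.
    """

    def __init__(self, c: int):
        super().__init__()
        # Small gating head: concatenated features -> per-channel weights.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # squeeze spatial dimensions
            nn.Conv2d(2 * c, c, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c, c, kernel_size=1),
            nn.Sigmoid(),                      # weights in (0, 1)
        )

    def forward(self, rgb_feat: torch.Tensor, disp_feat: torch.Tensor) -> torch.Tensor:
        # The weights depend on the inputs, so fusion adapts per sample.
        w = self.gate(torch.cat([rgb_feat, disp_feat], dim=1))
        return w * rgb_feat + (1.0 - w) * disp_feat

# Usage: fuse 64-channel feature maps from two encoder branches.
fuse = DynamicFusionSketch(c=64)
out = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Because the gating weights are recomputed for every input, the block can lean on the disparity stream where geometry is informative (e.g., road anomalies) and on the RGB stream elsewhere; a block of this shape can be dropped between matching encoder stages of an existing data-fusion network, which is the deployment style the abstract describes for the DFM.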

Research Area(s)

  • Deep learning in robotics and automation, dynamic fusion, mobile robots, semantic scene understanding