Abstract
Paired RGB and depth images are an increasingly popular form of multi-modal data in computer vision tasks. Traditional methods based on Convolutional Neural Networks (CNNs) typically fuse RGB and depth by combining their deep representations at a late stage along a single path, which can be ambiguous and insufficient for fusing large amounts of cross-modal data. To address this issue, we propose a novel multi-scale multi-path fusion network with cross-modal interactions (MMCI), which advances the traditional two-stream architecture with a single fusion path by diversifying the fusion path into a global-reasoning path and a local-capturing path, while introducing cross-modal interactions in multiple layers. Compared to traditional two-stream architectures, the MMCI net supplies more adaptive and flexible fusion flows, easing optimization and enabling sufficient and efficient fusion. At the same time, the MMCI net is equipped with multi-scale perception, i.e., simultaneous global and local contextual reasoning. We take RGB-D saliency detection as an example task. Extensive experiments on three benchmark datasets show the improvement of the proposed MMCI net over other state-of-the-art methods.
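The fusion scheme summarized in the abstract can be sketched very loosely in NumPy. Everything below is an illustrative assumption rather than the paper's implementation: the function names are hypothetical, element-wise summation stands in for the cross-modal interactions, global average pooling stands in for the global-reasoning path, and a 3x3 mean filter stands in for the local-capturing convolutions.

```python
import numpy as np

def cross_modal_interaction(rgb_feat, depth_feat):
    # Hypothetical interaction: each modality absorbs the other's
    # features via element-wise summation (one of many possible choices).
    return rgb_feat + depth_feat, depth_feat + rgb_feat

def global_path(feat):
    # Global-reasoning stand-in: collapse spatial dims into a per-channel
    # context vector, then use it to modulate the feature map.
    context = feat.mean(axis=(1, 2), keepdims=True)  # shape (C, 1, 1)
    return feat * context

def local_path(feat):
    # Local-capturing stand-in: a 3x3 mean filter in place of learned
    # convolutions, preserving spatial resolution via edge padding.
    padded = np.pad(feat, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(feat)
    _, H, W = feat.shape
    for i in range(H):
        for j in range(W):
            out[:, i, j] = padded[:, i:i + 3, j:j + 3].mean(axis=(1, 2))
    return out

def mmci_fusion(rgb_feat, depth_feat):
    # Interact across modalities first, then fuse along two parallel
    # paths (global and local) instead of a single late-fusion path.
    rgb_i, depth_i = cross_modal_interaction(rgb_feat, depth_feat)
    g = global_path(rgb_i) + global_path(depth_i)
    l = local_path(rgb_i) + local_path(depth_i)
    return g + l  # merge the two fusion paths

# Toy features: 8 channels on a 16x16 spatial grid per modality.
rgb = np.random.rand(8, 16, 16)
depth = np.random.rand(8, 16, 16)
out = mmci_fusion(rgb, depth)
print(out.shape)  # (8, 16, 16)
```

The point of the two-path structure is that a single fused tensor carries both a globally pooled context signal and locally filtered detail, rather than relying on one late-stage combination.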
| Original language | English |
|---|---|
| Pages (from-to) | 376-385 |
| Journal | Pattern Recognition |
| Volume | 86 |
| Online published | 13 Aug 2018 |
| DOIs | |
| Publication status | Published - Feb 2019 |
Research Keywords
- Convolutional neural networks
- Multi-path
- RGB-D
- Saliency detection
RGC Funding Information
- RGC-funded
Fingerprint
Dive into the research topics of 'Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for RGB-D salient object detection'. Together they form a unique fingerprint.
Projects
- 1 Finished
GRF: A Novel Infrared Sensing Method for Enhanced Motion Detection and Tracking
LI, Y. F. (Principal Investigator / Project Coordinator)
1/01/16 → 29/06/20
Project: Research