TY - JOUR
T1 - Expanding Sparse LiDAR Depth and Guiding Stereo Matching for Robust Dense Depth Estimation
AU - Xu, Zhenyu
AU - Li, Yuehua
AU - Zhu, Shiqiang
AU - Sun, Yuxiang
PY - 2023/3
Y1 - 2023/3
N2 - Dense depth estimation is an important task for applications such as object detection and 3-D reconstruction. Stereo matching, a popular method for dense depth estimation, faces challenges when low texture, occlusions, or domain gaps exist. Stereo-LiDAR fusion has recently become a promising way to address these challenges. However, due to the sparsity and uneven distribution of LiDAR depth data, existing stereo-LiDAR fusion methods tend to ignore the data when its density is very low or when it differs greatly from the depth predicted from stereo images. To address this problem, we propose a stereo-LiDAR fusion method that first expands the sparse LiDAR depth to a semi-dense depth with the RGB image as reference. Then, based on the semi-dense depth, a varying-weight Gaussian guiding method is proposed to deal with the varying reliability of guiding signals. A multi-scale feature extraction and fusion method is further used to enhance the network, which shows superior performance over traditional sparse invariant convolution methods. Experimental results on different public datasets demonstrate our superior accuracy and robustness over the state of the art. © 2023 IEEE.
AB - Dense depth estimation is an important task for applications such as object detection and 3-D reconstruction. Stereo matching, a popular method for dense depth estimation, faces challenges when low texture, occlusions, or domain gaps exist. Stereo-LiDAR fusion has recently become a promising way to address these challenges. However, due to the sparsity and uneven distribution of LiDAR depth data, existing stereo-LiDAR fusion methods tend to ignore the data when its density is very low or when it differs greatly from the depth predicted from stereo images. To address this problem, we propose a stereo-LiDAR fusion method that first expands the sparse LiDAR depth to a semi-dense depth with the RGB image as reference. Then, based on the semi-dense depth, a varying-weight Gaussian guiding method is proposed to deal with the varying reliability of guiding signals. A multi-scale feature extraction and fusion method is further used to enhance the network, which shows superior performance over traditional sparse invariant convolution methods. Experimental results on different public datasets demonstrate our superior accuracy and robustness over the state of the art. © 2023 IEEE.
KW - AI-based methods
KW - Computer vision for automation
KW - sensor fusion
UR - http://www.scopus.com/inward/record.url?scp=85148300135&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85148300135&origin=recordpage
U2 - 10.1109/LRA.2023.3240093
DO - 10.1109/LRA.2023.3240093
M3 - RGC 21 - Publication in refereed journal
SN - 2377-3766
VL - 8
SP - 1479
EP - 1486
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 3
ER -