GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Zhang, Yifan; Zhang, Qijian; Zhu, Zhiyu et al.
Related Research Unit(s)
Detail(s)
| Original language | English |
| --- | --- |
| Pages (from-to) | 3332–3352 |
| Journal / Publication | International Journal of Computer Vision |
| Volume | 131 |
| Issue number | 12 |
| Online published | 15 Aug 2023 |
| Publication status | Published - Dec 2023 |
Abstract
The inherent ambiguity in ground-truth annotations of 3D bounding boxes, caused by occlusions, missing signals, or manual annotation errors, can confuse deep 3D object detectors during training and thus degrade detection accuracy. However, existing methods largely overlook this issue and treat the labels as deterministic. In this paper, we formulate the label uncertainty problem as the diversity of potentially plausible bounding boxes of objects. We then propose GLENet, a generative framework adapted from conditional variational autoencoders, to model the one-to-many relationship between a typical 3D object and its potential ground-truth bounding boxes with latent variables. The label uncertainty generated by GLENet is a plug-and-play module and can be conveniently integrated into existing deep 3D detectors to build probabilistic detectors and supervise the learning of localization uncertainty. In addition, we propose an uncertainty-aware quality estimator architecture for probabilistic detectors that guides the training of the IoU branch with predicted localization uncertainty. We incorporate the proposed methods into several popular base 3D detectors and demonstrate significant and consistent performance gains on both the KITTI and Waymo benchmark datasets. In particular, the proposed GLENet-VR outperforms all published LiDAR-based approaches by a large margin and ranks first among single-modal methods on the challenging KITTI test set. The source code and pre-trained models are publicly available at https://github.com/Eaphan/GLENet. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023.
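To make the abstract's core idea concrete, the following is a minimal, hedged PyTorch sketch of a conditional variational autoencoder over bounding boxes: a recognition network conditions on object context and the annotated box, a prior network conditions on context alone, and at inference the per-parameter variance of boxes decoded from multiple prior samples serves as an estimate of label uncertainty. All names here (e.g. `BoxCVAESketch`, the 128-dim context features) are illustrative assumptions, not the authors' released GLENet code.

```python
# Illustrative sketch of a CVAE over 3D boxes; not the authors' implementation.
import torch
import torch.nn as nn


class BoxCVAESketch(nn.Module):
    def __init__(self, ctx_dim=128, box_dim=7, latent_dim=8):
        super().__init__()
        # Prior network: latent distribution conditioned on object context only.
        self.prior = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(),
                                   nn.Linear(64, 2 * latent_dim))
        # Recognition network: conditioned on context and the annotated box.
        self.recog = nn.Sequential(nn.Linear(ctx_dim + box_dim, 64), nn.ReLU(),
                                   nn.Linear(64, 2 * latent_dim))
        # Decoder: reconstructs a plausible box from context and a latent code.
        self.decoder = nn.Sequential(nn.Linear(ctx_dim + latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, box_dim))

    @staticmethod
    def _split(params):
        mu, log_var = params.chunk(2, dim=-1)
        return mu, log_var

    def forward(self, ctx, gt_box):
        # Training pass: sample z from the recognition posterior (reparameterization).
        mu_q, logv_q = self._split(self.recog(torch.cat([ctx, gt_box], dim=-1)))
        mu_p, logv_p = self._split(self.prior(ctx))
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logv_q)
        box_rec = self.decoder(torch.cat([ctx, z], dim=-1))
        # KL(q || p) between two diagonal Gaussians, summed over latent dimensions.
        kl = 0.5 * (logv_p - logv_q
                    + (logv_q.exp() + (mu_q - mu_p) ** 2) / logv_p.exp() - 1).sum(-1)
        recon = (box_rec - gt_box).abs().sum(-1)  # L1 reconstruction of the box
        return recon.mean() + kl.mean()

    @torch.no_grad()
    def label_uncertainty(self, ctx, num_samples=30):
        # Inference: decode boxes from several prior samples; their per-parameter
        # variance is taken as the estimated label uncertainty for this object.
        mu_p, logv_p = self._split(self.prior(ctx))
        boxes = []
        for _ in range(num_samples):
            z = mu_p + torch.randn_like(mu_p) * torch.exp(0.5 * logv_p)
            boxes.append(self.decoder(torch.cat([ctx, z], dim=-1)))
        return torch.stack(boxes, dim=0).var(dim=0)  # shape: (batch, box_dim)


if __name__ == "__main__":
    model = BoxCVAESketch()
    ctx = torch.randn(4, 128)   # placeholder per-object point-cloud features
    gt_box = torch.randn(4, 7)  # (x, y, z, l, w, h, yaw)
    loss = model(ctx, gt_box)
    sigma2 = model.label_uncertainty(ctx)
    print(loss.item(), sigma2.shape)
```

In the paper's pipeline, such per-box variance estimates would then supervise the localization-uncertainty head of a probabilistic detector; how the context features and detector integration are built is not shown here.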
Research Area(s)
- 3D object detection, Label uncertainty, Conditional variational autoencoders, Probabilistic object detection, 3D point cloud
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
GLENet: Boosting 3D Object Detectors with Generative Label Uncertainty Estimation. / Zhang, Yifan; Zhang, Qijian; Zhu, Zhiyu et al.
In: International Journal of Computer Vision, Vol. 131, No. 12, 12.2023, p. 3332–3352.
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review