Transferring and Regularizing Prediction for Semantic Segmentation
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Zhang, Yiheng; Qiu, Zhaofan; Yao, Ting et al.
Related Research Unit(s)
Detail(s)
| Original language | English |
|---|---|
| Title of host publication | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 |
| Subtitle of host publication | Proceedings |
| Publisher | Institute of Electrical and Electronics Engineers |
| Pages | 9618-9627 |
| ISBN (electronic) | 978-1-7281-7168-5 |
| ISBN (print) | 978-1-7281-7169-2 |
| Publication status | Published - Jun 2020 |
Publication series
| Name | IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR |
|---|---|
| Publisher | Institute of Electrical and Electronics Engineers |
| ISSN (print) | 1063-6919 |
| ISSN (electronic) | 2575-7075 |
Conference
| Title | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020) |
|---|---|
| Location | Virtual |
| Place | United States |
| City | Seattle |
| Period | 13 - 19 June 2020 |
Abstract
Semantic segmentation often requires a large set of images with pixel-level annotations. Given the extremely expensive cost of expert labeling, recent research has shown that models trained on photo-realistic synthetic data (e.g., computer games) with computer-generated annotations can be adapted to real images. Despite this progress, without constraining the prediction on real images, models easily overfit to the synthetic data due to the severe domain mismatch. In this paper, we exploit the intrinsic properties of semantic segmentation to alleviate this problem for model transfer. Specifically, we present a Regularizer of Prediction Transfer (RPT) that imposes the intrinsic properties as constraints to regularize model transfer in an unsupervised fashion. These constraints include patch-level, cluster-level, and context-level semantic prediction consistencies at different levels of image formation. As the transfer is label-free and data-driven, the robustness of prediction is addressed by selectively involving a subset of image regions for model regularization. Extensive experiments verify RPT on the transfer of models trained on GTA5 and SYNTHIA (synthetic data) to the Cityscapes dataset (urban street scenes). RPT shows consistent improvements when injecting the constraints into several neural networks for semantic segmentation. More remarkably, when integrating RPT into an adversarial-based segmentation framework, we report the best results to date: mIoU of 53.2%/51.7% when transferring from GTA5/SYNTHIA to Cityscapes, respectively.
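To make the idea of a prediction-consistency regularizer concrete, the sketch below illustrates one of the constraints the abstract describes at patch level: pixels within a small patch are encouraged to agree with the patch's mean class distribution, and only confident pixels are included, mirroring the selective regularization mentioned above. All names, shapes, and the confidence threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a patch-level prediction-consistency loss,
# in the spirit of RPT (not the paper's actual code).
import numpy as np

def patch_consistency_loss(probs, patch=4, conf_thresh=0.5):
    """probs: (H, W, C) per-pixel class probabilities (each pixel sums to 1)."""
    H, W, C = probs.shape
    eps = 1e-8
    total, count = 0.0, 0
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            block = probs[i:i + patch, j:j + patch].reshape(-1, C)
            mean = block.mean(axis=0)                    # patch-level consensus
            confident = block.max(axis=1) > conf_thresh  # selective regularization
            if not confident.any():
                continue
            sel = block[confident]
            # KL(pixel prediction || patch mean), summed per confident pixel
            kl = np.sum(sel * (np.log(sel + eps) - np.log(mean + eps)), axis=1)
            total += kl.sum()
            count += len(kl)
    return total / max(count, 1)

# When every pixel in a patch predicts the same class, the loss vanishes.
peaked = np.zeros((8, 8, 3))
peaked[..., 0] = 1.0
print(patch_consistency_loss(peaked))  # 0.0

# Introducing one disagreeing pixel yields a positive penalty.
mixed = peaked.copy()
mixed[0, 0] = [0.0, 1.0, 0.0]
print(patch_consistency_loss(mixed) > 0)  # True
```

In a full transfer setup, such a term would be added to the segmentation objective on unlabeled real images, so the regularization stays label-free as the abstract requires.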
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Transferring and Regularizing Prediction for Semantic Segmentation. / Zhang, Yiheng; Qiu, Zhaofan; Yao, Ting et al.
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020: Proceedings. Institute of Electrical and Electronics Engineers, 2020. p. 9618-9627 (IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR).
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review