Cross-domain Semantic Decoupling for Weakly-Supervised Semantic Segmentation

Abstract
Weakly-supervised semantic segmentation (WSSS) aims to obtain pixel-wise pseudo labels from image-level labels for segmentation supervision. However, because multiple categories co-occur in an image, accurate pseudo labels are difficult to obtain, leading to the unsatisfactory performance of current methods. In this paper, we observe that accurate pseudo labels are easier to obtain from images containing only a single semantic object (i.e., single-label images) than from those containing multiple semantic objects (i.e., multi-label images). This inspires us to treat the localization maps from single-label images (referred to as the source domain) as good prior knowledge and transfer them to multi-label images (referred to as the target domain). Specifically, we present a cross-domain semantic decoupling (CSD) method that first splits the image data into source and target domains, and then uses a co-occurrence-oriented copy-and-paste scheme to enforce pixel-wise consistency and regularize the network's responses to the same objects in the two domains. This design reduces semantic ambiguity and yields more accurate class boundaries in the pseudo labels. Our method can be seamlessly incorporated into existing WSSS models. Extensive experiments on PASCAL VOC 2012 demonstrate that the proposed CSD significantly improves the quality of both pseudo labels and final segmentation results.

© 2023. The copyright of this document resides with its authors.
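A minimal sketch of the copy-and-paste consistency idea described above: an object region from a single-label (source-domain) image is pasted into a multi-label (target-domain) image, and disagreement between the source localization map and the map predicted on the composite is penalized at the pasted pixels. The function names (`copy_paste`, `consistency_loss`), the boolean object mask, and the L2 penalty are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def copy_paste(src_img, src_mask, tgt_img):
    """Composite image: pixels under src_mask are copied from the source image.

    src_mask is a hypothetical HxW boolean mask of the single-label object
    (e.g. a thresholded localization map).
    """
    out = tgt_img.copy()
    out[src_mask] = src_img[src_mask]
    return out

def consistency_loss(map_src, map_mix, src_mask):
    """Mean squared difference of localization responses inside the pasted region."""
    diff = (map_src - map_mix) ** 2
    return float(diff[src_mask].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 8, 8
    src = rng.random((h, w, 3))   # toy single-label image
    tgt = rng.random((h, w, 3))   # toy multi-label image
    mask = np.zeros((h, w), dtype=bool)
    mask[2:6, 2:6] = True         # toy "object" region from the source image

    mixed = copy_paste(src, mask, tgt)
    # Identical localization maps inside the mask give zero consistency loss.
    print(consistency_loss(np.ones((h, w)), np.ones((h, w)), mask))  # → 0.0
```

In a real pipeline the two maps would come from a shared network applied to the source image and to the composite, so minimizing this loss regularizes the network to respond identically to the same object in both domains.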
| Original language | English |
|---|---|
| Title of host publication | The 34th British Machine Vision Conference Proceedings |
| Number of pages | 12 |
| Publication status | Published - Nov 2023 |
| Event | The 34th British Machine Vision Conference, 20 Nov 2023 → 24 Nov 2023 |
Conference
| Conference | The 34th British Machine Vision Conference |
|---|---|
| Period | 20/11/23 → 24/11/23 |