Semantic-oriented Labeled-to-unlabeled Distribution Translation for Image Segmentation
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review
Author(s)
Guo, Xiaoqing; Liu, Jie; Yuan, Yixuan
Related Research Unit(s)
Detail(s)
| Detail | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 434-445 |
| Journal / Publication | IEEE Transactions on Medical Imaging |
| Volume | 41 |
| Issue number | 2 |
| Online published | 20 Sep 2021 |
| Publication status | Published - Feb 2022 |
Link(s)
Abstract
Automatic medical image segmentation plays a crucial role in many medical applications, such as disease diagnosis and treatment planning. Existing deep-learning-based models usually regard the segmentation task as pixel-wise classification and neglect the semantic correlations of pixels across different images, leading to vague feature distributions. Moreover, pixel-wise annotated data are rare in the medical domain, and the scarce annotated data are usually biased away from the desired distribution, hindering performance improvement under the supervised learning setting. In this paper, we propose a novel Labeled-to-unlabeled Distribution Translation (L2uDT) framework with Semantic-oriented Contrastive Learning (SoCL) to address the aforementioned issues in medical image segmentation. In SoCL, a semantic grouping module is designed to cluster pixels into a set of semantically coherent groups, and a semantic-oriented contrastive loss is introduced to constrain group-wise prototypes, so as to explicitly learn a feature space with intra-class compactness and inter-class separability. We then establish an L2uDT strategy to approximate the desired data distribution for unbiased optimization, where we translate the labeled data distribution under the guidance of extensive unlabeled data. In particular, a bias estimator is devised to measure the distribution bias, and a gradual-paced shift is then derived to progressively translate the labeled data distribution to the unlabeled one. Both labeled and translated data are leveraged to optimize the segmentation model simultaneously. We evaluate the proposed method on two benchmark datasets, EndoScene and PROSTATEx, where it achieves state-of-the-art performance, clearly demonstrating its effectiveness for medical image segmentation. The source code is available at https://github.com/CityU-AIM-Group/L2uDT.
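To make the abstract's two core ideas concrete, the snippet below gives a minimal, PyTorch-style sketch of (i) a contrastive loss over group-wise prototypes and (ii) a gradual-paced shift that moves labeled features toward the unlabeled distribution. Everything here is an illustrative assumption, not the authors' implementation: the function names, the mean-based bias estimator, the linear pacing schedule, and parameters such as `temperature` are invented for exposition. The official code is in the linked repository.

```python
import torch
import torch.nn.functional as F


def semantic_contrastive_loss(features, labels, num_classes, temperature=0.1):
    """Contrastive loss over group-wise prototypes (illustrative form only)."""
    feats = F.normalize(features, dim=1)              # (N, D) pixel embeddings
    protos, kept = [], []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            # Prototype of a semantic group = mean of its pixel embeddings.
            protos.append(F.normalize(feats[mask].mean(dim=0), dim=0))
            kept.append(c)
    protos = torch.stack(protos)                      # (C', D)
    # Map each pixel's class to the index of its (kept) prototype.
    remap = {c: i for i, c in enumerate(kept)}
    targets = torch.tensor([remap[int(c)] for c in labels], device=feats.device)
    # InfoNCE-style objective: pull each pixel toward its own prototype
    # (intra-class compactness) and away from the others (inter-class separability).
    logits = feats @ protos.t() / temperature         # (N, C')
    return F.cross_entropy(logits, targets)


def gradual_paced_shift(labeled_feats, unlabeled_feats, step, total_steps):
    """Translate labeled features toward the unlabeled distribution (assumed scheme)."""
    # Bias estimator: gap between the unlabeled and labeled feature means.
    bias = unlabeled_feats.mean(dim=0) - labeled_feats.mean(dim=0)
    # Gradual pace: the shift grows from 0 to 1 over training (linear schedule assumed).
    pace = min(1.0, step / float(total_steps))
    return labeled_feats + pace * bias
```

In a training loop of this assumed form, the segmentation loss would be computed on both the labeled and the translated features, with the contrastive term added as an auxiliary objective.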
Research Area(s)
- Data models, Feature extraction, few sample segmentation, Image segmentation, labeled-to-unlabeled distribution translation, Semantic-oriented contrastive learning, Semantics, Semisupervised learning, Task analysis, Three-dimensional displays
Citation Format(s)
Semantic-oriented Labeled-to-unlabeled Distribution Translation for Image Segmentation. / Guo, Xiaoqing; Liu, Jie; Yuan, Yixuan.
In: IEEE Transactions on Medical Imaging, Vol. 41, No. 2, 02.2022, p. 434-445.