
Deep Image Matting with Sparse User Interactions

Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Hanqing Zhao, Weiming Zhang, Gang Hua, Nenghai Yu

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Image matting is a fundamental and challenging problem in computer vision and graphics. Most existing matting methods leverage a user-supplied trimap as an auxiliary input to produce a good alpha matte. However, obtaining a high-quality trimap is itself arduous. Recently, some hint-free methods have emerged; however, their matting quality still lags far behind that of trimap-based methods. The main reason is that some hints for resolving semantic ambiguity and improving matting quality are essential. There is thus a trade-off between interaction cost and matting quality. To balance performance and user-friendliness, we propose an improved deep image matting framework that is trimap-free and needs only sparse user click or scribble interactions, minimizing the required auxiliary constraints while still allowing interactivity. Moreover, we introduce uncertainty estimation to predict which parts need polishing and conduct uncertainty-guided refinement. To trade off runtime against refinement quality, users can also choose between different refinement modes. Experimental results show that our method outperforms existing trimap-free methods and performs comparably to state-of-the-art trimap-based methods with minimal user effort. Finally, we demonstrate the extensibility of our framework to video human matting without any structural modification, by adding optical flow-based sparse hint propagation and temporal consistency regularization imposed on each single frame. © 2023 IEEE.
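The uncertainty-guided refinement described above can be illustrated with a minimal sketch: only image patches whose predicted uncertainty is high get re-processed, which is what makes the runtime/quality trade-off tunable. This is an illustrative assumption, not the paper's actual implementation — the function name, the fixed patch grid, and the placeholder "refinement" (simple local averaging standing in for the learned refinement network) are all hypothetical.

```python
import numpy as np

def uncertainty_guided_refine(alpha, uncertainty, patch=8, thresh=0.5):
    """Refine only high-uncertainty patches of a predicted alpha matte.

    `alpha` and `uncertainty` are H x W float arrays in [0, 1].
    Patches whose mean uncertainty exceeds `thresh` are re-processed;
    all other patches are kept untouched. The local averaging below is
    a placeholder for a learned refinement module.
    """
    refined = alpha.copy()
    h, w = alpha.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block_unc = uncertainty[y:y + patch, x:x + patch]
            if block_unc.mean() > thresh:
                region = alpha[y:y + patch, x:x + patch]
                # Placeholder refinement: replace with the patch mean.
                refined[y:y + patch, x:x + patch] = region.mean()
    return refined
```

Raising `thresh` (or enlarging `patch`) reduces the number of refined patches and hence runtime, at the cost of refinement quality — mirroring the refinement modes mentioned in the abstract.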
Original language: English
Pages (from-to): 881-895
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 46
Issue number: 2
Online published: 23 Oct 2023
Publication status: Published - Feb 2024

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Funding

This work was supported in part by the Natural Science Foundation of China under Grants 62372423, U20B2047, 62072421, 62002334, and 62121002; in part by the Key Research and Development Program of Anhui Province under Grant 2022k07020008; in part by the Fundamental Research Funds for the Central Universities under Grant WK5290000003; and in part by the Research Grants Council (RGC) of the Hong Kong Special Administrative Region under GRF Grant CityU 11216122.

Research Keywords

  • Estimation
  • Image Matting
  • Image segmentation
  • Runtime
  • Semantics
  • Sparse Interactions
  • Task analysis
  • Training
  • Uncertainty
  • Uncertainty Estimation
  • Video Human Matting

RGC Funding Information

  • RGC-funded

