Progressive Point Cloud Upsampling via Differentiable Rendering

Pingping Zhang, Xu Wang*, Lin Ma, Shiqi Wang, Sam Kwong, Jianmin Jiang

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

36 Citations (Scopus)

Abstract

In this paper, we propose a novel progressive point cloud upsampling framework to tackle the non-uniform distribution issue in the point cloud upsampling process. Specifically, we design an Up-UNet feature expansion module that learns local and global point features via a down-feature operator and an up-feature operator, respectively, to alleviate the non-uniform distribution and remove outliers. Moreover, we design a hybrid loss function that combines a multi-scale reconstruction loss with a rendering loss. The multi-scale reconstruction loss enables each upsampling module to generate a denser point cloud, while the rendering loss, computed via point-based differentiable rendering, ensures that the model preserves point cloud structures. Extensive experimental results demonstrate that our proposed model achieves state-of-the-art performance in both qualitative and quantitative evaluations.
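The hybrid loss described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes a symmetric Chamfer distance as the per-scale reconstruction term, a simple pixel-wise L2 image difference standing in for the point-based differentiable rendering loss, and hypothetical weighting parameters `alpha` and `beta`.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).
    A common reconstruction loss for point clouds; assumed here as the
    per-scale reconstruction term."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)
    # Nearest-neighbor term in both directions, averaged.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def hybrid_loss(pred_clouds, gt_clouds, rendered_pred, rendered_gt,
                alpha=1.0, beta=0.1):
    """Hypothetical hybrid loss: multi-scale reconstruction + rendering term.

    pred_clouds, gt_clouds : lists of point sets, one pair per upsampling stage
                             (the multi-scale reconstruction loss sums over stages).
    rendered_pred, rendered_gt : images produced by a (differentiable) point
                             renderer; here compared with plain L2.
    alpha, beta : illustrative weights, not values from the paper.
    """
    recon = sum(chamfer_distance(p, g) for p, g in zip(pred_clouds, gt_clouds))
    render = float(((rendered_pred - rendered_gt) ** 2).mean())
    return alpha * recon + beta * render
```

In a real pipeline both terms would be computed inside an autodiff framework so that gradients from the rendered-image comparison flow back to the point coordinates; the sketch only shows how the two terms are combined.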
Original language: English
Pages (from-to): 4673-4685
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 31
Issue number: 12
Online published: 26 Jul 2021
Publication status: Published - Dec 2021

Research Keywords

  • feature expansion unit
  • Geometry
  • Image reconstruction
  • Point cloud upsampling
  • point-based differentiable rendering
  • Rendering (computer graphics)
  • Shape
  • Surface reconstruction
  • Task analysis
  • Three-dimensional displays
