Learning A Locally Unified 3D Point Cloud for View Synthesis

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

2 Scopus Citations

Detail(s)

Original language: English
Pages (from-to): 5610-5622
Journal / Publication: IEEE Transactions on Image Processing
Volume: 32
Online published: 9 Oct 2023
Publication status: Published - 2023

Abstract

In this paper, we explore the problem of 3D point cloud representation-based view synthesis from a set of sparse source views. To tackle this challenging problem, we propose a new deep learning-based view synthesis paradigm that learns a locally unified 3D point cloud from the source views. Specifically, we first construct sub-point clouds by projecting the source views into 3D space based on their depth maps. We then learn the locally unified 3D point cloud by adaptively fusing points within a local neighborhood defined on the union of the sub-point clouds. In addition, we propose a 3D geometry-guided image restoration module to fill holes and recover high-frequency details in the rendered novel views. Experimental results on three benchmark datasets demonstrate that, compared with state-of-the-art view synthesis methods, our method improves the average PSNR by more than 4 dB while preserving more accurate visual details. The code will be publicly available at https://github.com/mengyou2/PCVS. © 2023 IEEE.
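
The first step of the paradigm, lifting each source view into a sub-point cloud from its depth map, follows standard pinhole back-projection. Below is a minimal illustrative sketch in Python; it is not the authors' released implementation, and the function name, camera parameters (K, R, t), and the NumPy-based formulation are assumptions made for illustration only.

    # Illustrative sketch (not the authors' code): back-projecting one source view
    # into a sub-point cloud from its depth map, using pinhole camera geometry.
    import numpy as np

    def unproject_to_point_cloud(image, depth, K, R, t):
        """Lift each pixel of a source view to a 3D point in the world frame.

        image : (H, W, 3) RGB values, carried along as per-point colours
        depth : (H, W)    per-pixel depth, e.g. from an estimated depth map
        K     : (3, 3)    camera intrinsics
        R, t  : (3, 3), (3,) camera-to-world rotation and translation
        """
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))      # pixel coordinate grid
        pix = np.stack([u, v, np.ones_like(u)], axis=-1)    # homogeneous pixels (H, W, 3)
        rays = pix.reshape(-1, 3) @ np.linalg.inv(K).T      # rays in the camera frame
        pts_cam = rays * depth.reshape(-1, 1)               # scale rays by depth
        pts_world = pts_cam @ R.T + t                       # transform to the world frame
        colours = image.reshape(-1, 3)
        return pts_world, colours                           # one sub-point cloud

In the paper's pipeline, the union of such sub-point clouds from all source views is then fused adaptively within local neighborhoods to form the locally unified point cloud that is rendered to novel viewpoints.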

Research Area(s)

  • Image-based rendering, view synthesis, 3D point clouds, point cloud fusion, deep learning

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.