Learning Neural Duplex Radiance Fields for Real-Time View Synthesis

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

4 Scopus Citations

Author(s)

  • Ziyu Wan
  • Christian Richardt
  • Aljaž Božič
  • Chao Li
  • Vijay Rengarajan
  • Seonghyeon Nam
  • Xiaoyu Xiang
  • Tuotuo Li
  • Bo Zhu
  • Rakesh Ranjan
  • Jing Liao

Related Research Unit(s)

Detail(s)

Original language: English
Title of host publication: Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers, Inc.
Pages: 8307-8316
Number of pages: 10
ISBN (electronic): 9798350301298
ISBN (print): 979-8-3503-0130-4
Publication status: Published - 2023

Publication series

Name: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
ISSN (print): 1063-6919
ISSN (electronic): 2575-7075

Conference

Title: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)
Location: Vancouver Convention Center
Place: Canada
City: Vancouver
Period: 18 - 22 June 2023

Abstract

Neural radiance fields (NeRFs) enable novel-view synthesis with unprecedented visual quality. However, to render photorealistic images, NeRFs require hundreds of deep multilayer perceptron (MLP) evaluations per pixel. This is prohibitively expensive and makes real-time rendering infeasible, even on powerful modern GPUs. In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline. We represent scenes as neural radiance features encoded on a two-layer duplex mesh, which effectively overcomes the inherent inaccuracies in 3D surface reconstruction by learning the aggregated radiance information from a reliable interval of ray-surface intersections. To exploit local geometric relationships of nearby pixels, we leverage screen-space convolutions instead of the MLPs used in NeRFs to achieve high-quality appearance. Finally, the performance of the whole framework is further boosted by a novel multi-view distillation optimization strategy. We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets. © 2023 IEEE.
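
To make the two core ideas in the abstract concrete (screen-space convolutions in place of per-sample MLPs, and multi-view distillation from a teacher NeRF), here is a minimal PyTorch sketch. It is an illustration only, not the authors' implementation: the class name, feature dimension, and tensor shapes are hypothetical, and random tensors stand in for a real duplex-mesh rasterizer and teacher renderer.

```python
# Illustrative sketch only; names are hypothetical and the rasterized
# duplex-mesh features are mocked with random tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScreenSpaceDecoder(nn.Module):
    """Decodes per-pixel features from the two duplex-mesh layers to RGB
    with 2D convolutions, in place of a NeRF's per-sample MLP evaluations."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=1),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, outer_feats: torch.Tensor,
                inner_feats: torch.Tensor) -> torch.Tensor:
        # outer_feats/inner_feats: (B, feat_dim, H, W) feature maps gathered
        # at the ray intersections with the outer and inner mesh layers;
        # stacking them lets the decoder aggregate radiance information over
        # the interval between the two surfaces.
        return self.net(torch.cat([outer_feats, inner_feats], dim=1))

decoder = ScreenSpaceDecoder(feat_dim=8)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

# One multi-view distillation step: the student is supervised by an image the
# pre-trained teacher NeRF rendered from the same viewpoint. Random tensors
# stand in for the rasterized duplex features and the teacher rendering.
outer = torch.rand(1, 8, 128, 128)
inner = torch.rand(1, 8, 128, 128)
teacher_rgb = torch.rand(1, 3, 128, 128)

loss = F.mse_loss(decoder(outer, inner), teacher_rgb)
loss.backward()
optimizer.step()
```

The design intuition the abstract points at: a small 2D CNN processes the whole feature image in one pass and shares computation across neighboring pixels, whereas a NeRF must run its MLP for hundreds of samples along every pixel's ray.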

Research Area(s)

  • 3D from multi-view and sensors

Bibliographic Note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Citation Format(s)

Learning Neural Duplex Radiance Fields for Real-Time View Synthesis. / Wan, Ziyu; Richardt, Christian; Božič, Aljaž et al.
Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Institute of Electrical and Electronics Engineers, Inc., 2023. p. 8307-8316 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review