Learning Neural Duplex Radiance Fields for Real-Time View Synthesis
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
Publisher | Institute of Electrical and Electronics Engineers, Inc. |
Pages | 8307-8316 |
Number of pages | 10 |
ISBN (electronic) | 979-8-3503-0129-8 |
ISBN (print) | 979-8-3503-0130-4 |
Publication status | Published - 2023 |
Publication series
Name | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
---|---|
ISSN (print) | 1063-6919 |
ISSN (electronic) | 2575-7075 |
Conference
Title | 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023) |
---|---|
Location | Vancouver Convention Center |
Place | Canada |
City | Vancouver |
Period | 18 - 22 June 2023 |
Abstract
Neural radiance fields (NeRFs) enable novel-view synthesis with unprecedented visual quality. However, to render photorealistic images, NeRFs require hundreds of deep multilayer perceptron (MLP) evaluations for each pixel. This is prohibitively expensive and makes real-time rendering infeasible, even on powerful modern GPUs. In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline. We represent scenes as neural radiance features encoded on a two-layer duplex mesh, which effectively overcomes the inherent inaccuracies in 3D surface reconstruction by learning the aggregated radiance information from a reliable interval of ray-surface intersections. To exploit the local geometric relationships of nearby pixels, we leverage screen-space convolutions instead of the MLPs used in NeRFs to achieve high-quality appearance. Finally, the performance of the whole framework is further boosted by a novel multi-view distillation optimization strategy. We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets. © 2023 IEEE.
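To make the decoding step described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of the core idea: per-pixel features rasterized from the two duplex mesh layers are concatenated with the per-pixel view direction and decoded by a small screen-space CNN into RGB, replacing the per-sample MLP evaluations of a standard NeRF. All names (e.g. `DuplexScreenSpaceDecoder`), feature dimensions, and layer sizes here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class DuplexScreenSpaceDecoder(nn.Module):
    """Hypothetical sketch of duplex-radiance decoding: features gathered
    at the two ray-mesh intersections (outer/inner layer) plus the view
    direction are decoded by a small screen-space CNN into RGB.
    Channel counts and depths are illustrative, not the paper's config."""

    def __init__(self, feat_dim: int = 8, hidden: int = 32):
        super().__init__()
        # Input: features from both mesh layers + a 3D view direction.
        in_ch = 2 * feat_dim + 3
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, kernel_size=1),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, feat_outer, feat_inner, view_dirs):
        # feat_*: (B, feat_dim, H, W) features rasterized from each layer.
        # view_dirs: (B, 3, H, W) per-pixel unit view directions.
        x = torch.cat([feat_outer, feat_inner, view_dirs], dim=1)
        return self.net(x)


# Toy usage with random tensors standing in for rasterized mesh features.
decoder = DuplexScreenSpaceDecoder()
B, H, W = 1, 64, 64
rgb = decoder(torch.randn(B, 8, H, W), torch.randn(B, 8, H, W),
              torch.randn(B, 3, H, W))
print(rgb.shape)  # torch.Size([1, 3, 64, 64])
```

Because the convolutions run once per frame over the whole image rather than per ray sample, this style of decoder maps naturally onto the rasterization pipeline the abstract targets.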
Research Area(s)
- 3D from multi-view and sensors
Bibliographic Note
Research Unit(s) information for this publication is provided by the author(s) concerned.
Citation Format(s)
Learning Neural Duplex Radiance Fields for Real-Time View Synthesis. / Wan, Ziyu; Richardt, Christian; Božič, Aljaž et al.
Proceedings - 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Institute of Electrical and Electronics Engineers, Inc., 2023. p. 8307-8316 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).