
Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields

Haoyuan Wang, Wenbo Hu*, Lei Zhu, Rynson W.H. Lau*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › Peer-reviewed

Abstract

Inverse rendering aims to recover both the geometry and materials of objects, yielding a reconstruction that is more compatible with conventional rendering engines than neural radiance fields (NeRFs). However, existing NeRF-based inverse rendering methods cannot handle glossy objects with local light interactions well, as they typically oversimplify the illumination as a 2D environment map, which assumes distant lighting only. Observing the superiority of NeRFs in recovering radiance fields, we propose a novel 5D Neural Plenoptic Function (NeP) based on NeRFs and ray tracing, such that more accurate lighting-object interactions can be formulated via the rendering equation. We also design a material-aware cone sampling strategy to efficiently integrate lights inside the BRDF lobes with the help of pre-filtered radiance fields. Our method has two stages: in the first, the geometry of the target object and the pre-filtered environmental radiance fields are reconstructed; in the second, the materials of the target object are estimated with the proposed NeP and material-aware cone sampling strategy. Extensive experiments on the proposed real-world and synthetic datasets demonstrate that our method can reconstruct high-fidelity geometry and materials of challenging glossy objects with complex lighting interactions from nearby objects. Project webpage: https://whyy.site/paper/nep.
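To illustrate the general idea behind the abstract's "material-aware cone sampling", the sketch below is a minimal, hypothetical Monte Carlo shading loop: incoming radiance is queried from a 5D plenoptic-style function `plenoptic_fn(x, direction)` (here a user-supplied stand-in, not the paper's network), and samples are drawn only inside a cone around the mirror direction whose half-angle grows with surface roughness. All names and the roughness-to-cone-angle mapping are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def reflect(v, n):
    # Mirror an incoming direction v about the unit normal n.
    return v - 2.0 * np.dot(v, n) * n

def sample_cone(axis, half_angle, rng):
    # Uniformly sample a unit direction inside a cone around `axis`.
    cos_max = np.cos(half_angle)
    u1, u2 = rng.random(2)
    cos_t = 1.0 - u1 * (1.0 - cos_max)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * np.pi * u2
    # Build an orthonormal frame (t, b, axis) around the cone axis.
    up = [0.0, 0.0, 1.0] if abs(axis[2]) < 0.9 else [1.0, 0.0, 0.0]
    t = np.cross(axis, up)
    t /= np.linalg.norm(t)
    b = np.cross(axis, t)
    return sin_t * np.cos(phi) * t + sin_t * np.sin(phi) * b + cos_t * axis

def shade(x, n, view_dir, plenoptic_fn, roughness, n_samples=64, rng=None):
    # Monte Carlo estimate of the rendering equation, restricted to a
    # cone around the mirror direction; the cone widens with roughness
    # (a simple stand-in for material-aware cone sampling).
    rng = rng or np.random.default_rng(0)
    r = reflect(-view_dir, n)  # mirror of the view direction
    half_angle = np.clip(roughness * np.pi / 2.0, 1e-3, np.pi / 2.0)
    total = np.zeros(3)
    for _ in range(n_samples):
        wi = sample_cone(r, half_angle, rng)
        # Query incoming radiance L(x, wi) and weight by cosine foreshortening.
        total += plenoptic_fn(x, wi) * max(0.0, np.dot(wi, n))
    return total / n_samples
```

In the paper the incoming radiance would come from the pre-filtered environmental radiance fields via ray tracing, so narrow cones (smooth materials) need few samples while rough materials integrate over a wider lobe.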

© 2024 IEEE
Original language: English
Title of host publication: Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Publisher: IEEE
Pages: 19999-20008
ISBN (Electronic): 979-8-3503-5300-6
DOIs
Publication status: Published - 2024
Event: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024) - Seattle Convention Center, Seattle, United States
Duration: 17 Jun 2024 - 21 Jun 2024
https://cvpr.thecvf.com/Conferences/2024
https://ieeexplore.ieee.org/xpl/conhome/1000147/all-proceedings
https://cvpr.thecvf.com/virtual/2024/index.html

Conference

Conference: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024)
Place: United States
City: Seattle
Period: 17/06/24 - 21/06/24
Internet address

Bibliographical note

Research Unit(s) information for this publication is provided by the author(s) concerned.

Funding

This work is partly supported by a GRF grant from the Research Grants Council of Hong Kong (Ref.: 11205620).

RGC Funding Information

  • RGC-funded

Project

  • GRF: Learning to Predict Scene Contexts

    LAU, R. W. H. (Principal Investigator / Project Coordinator), FU, H. (Co-Investigator) & FU, C. W. (Co-Investigator)

    1/01/21 - 12/06/25

    Project: Research
