FDNeRF : Few-shot Dynamic Neural Radiance Fields for Face Reconstruction and Expression Editing

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

11 Scopus Citations

Detail(s)

Original language: English
Title of host publication: Proceedings - SIGGRAPH Asia 2022
Subtitle of host publication: Conference Papers Proceedings
Editors: Soon Ki Jung, Jehee Lee, Adam Bargteil, Stephen N. Spencer
Place of Publication: New York
Publisher: Association for Computing Machinery, Inc
ISBN (print): 9781450394703
Publication status: Published - 2022

Publication series

Name: Proceedings - SIGGRAPH Asia Conference Papers

Conference

Title: 15th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2022)
Location: Daegu Exhibition & Convention Center (EXCO)
Place: Korea, Republic of
City: Daegu
Period: 6 - 9 December 2022

Abstract

We propose a Few-shot Dynamic Neural Radiance Field (FDNeRF), the first NeRF-based method capable of reconstruction and expression editing of 3D faces from a small number of dynamic images. Unlike existing dynamic NeRFs, which require dense input images and can only model a single identity, our method enables face reconstruction across different persons with few-shot inputs. Compared to state-of-the-art few-shot NeRFs designed for modeling static scenes, the proposed FDNeRF accepts view-inconsistent dynamic inputs and supports arbitrary facial expression editing, i.e., producing faces with novel expressions beyond the input ones. To handle the inconsistencies between dynamic inputs, we introduce a well-designed conditional feature warping (CFW) module that performs expression-conditioned warping in 2D feature space and is also identity-adaptive and 3D-constrained. As a result, features of different expressions are transformed into the target ones. We then construct a radiance field based on these view-consistent features and use volumetric rendering to synthesize novel views of the modeled faces. Extensive experiments with quantitative and qualitative evaluation demonstrate that our method outperforms existing dynamic and few-shot NeRFs on both 3D face reconstruction and expression editing tasks. Code is available at https://fdnerf.github.io.
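
The abstract's final step, synthesizing novel views by volumetric rendering of the constructed radiance field, refers to the standard NeRF-style alpha compositing along camera rays. The sketch below is ours, not the authors' code: a minimal PyTorch illustration of that compositing step, with tensor names, shapes, and the function name chosen for exposition only.

import torch

def composite_along_ray(sigmas, rgbs, deltas):
    # Illustrative sketch of standard NeRF volume rendering (not FDNeRF's code).
    # sigmas: (R, S) per-sample densities, rgbs: (R, S, 3) per-sample colors,
    # deltas: (R, S) distances between adjacent samples along each of R rays.
    alphas = 1.0 - torch.exp(-sigmas * deltas)  # per-sample opacity
    # Transmittance: probability that the ray reaches sample i unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]
    weights = alphas * trans                              # compositing weights
    rgb = (weights.unsqueeze(-1) * rgbs).sum(dim=-2)      # (R, 3) rendered colors
    return rgb, weights

In FDNeRF, the densities and colors fed to such a compositing step would come from the radiance field conditioned on the expression-warped, view-consistent 2D features described in the abstract.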

Research Area(s)

  • 3D face reconstruction, expression editing, few-shot and dynamic modeling, NeRF

Bibliographic Note

Publisher Copyright: © 2022 ACM.

Citation Format(s)

FDNeRF: Few-shot Dynamic Neural Radiance Fields for Face Reconstruction and Expression Editing. / Zhang, Jingbo; Li, Xiaoyu; Wan, Ziyu et al.
Proceedings - SIGGRAPH Asia 2022: Conference Papers Proceedings. ed. / Soon Ki Jung; Jehee Lee; Adam Bargteil; Stephen N. Spencer. New York: Association for Computing Machinery, Inc, 2022. 12 (Proceedings - SIGGRAPH Asia Conference Papers).
