FDNeRF: Few-shot Dynamic Neural Radiance Fields for Face Reconstruction and Expression Editing
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Zhang, Jingbo; Li, Xiaoyu; Wan, Ziyu et al.
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | Proceedings - SIGGRAPH Asia 2022 |
Subtitle of host publication | Conference Papers Proceedings |
Editors | Soon Ki Jung, Jehee Lee, Adam Bargteil, Stephen N. Spencer |
Place of Publication | New York |
Publisher | Association for Computing Machinery, Inc |
ISBN (print) | 9781450394703 |
Publication status | Published - 2022 |
Publication series
Name | Proceedings - SIGGRAPH Asia Conference Papers |
---|---|
Conference
Title | 15th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2022) |
---|---|
Location | Daegu Exhibition & Convention Center (EXCO) |
Place | Korea, Republic of |
City | Daegu |
Period | 6 - 9 December 2022 |
Link(s)
Abstract
We propose a Few-shot Dynamic Neural Radiance Field (FDNeRF), the first NeRF-based method capable of reconstruction and expression editing of 3D faces based on a small number of dynamic images. Unlike existing dynamic NeRFs that require dense images as input and can only be modeled for a single identity, our method enables face reconstruction across different persons with few-shot inputs. Compared to state-of-the-art few-shot NeRFs designed for modeling static scenes, the proposed FDNeRF accepts view-inconsistent dynamic inputs and supports arbitrary facial expression editing, i.e., producing faces with novel expressions beyond the input ones. To handle the inconsistencies between dynamic inputs, we introduce a well-designed conditional feature warping (CFW) module to perform expression conditioned warping in 2D feature space, which is also identity adaptive and 3D constrained. As a result, features of different expressions are transformed into the target ones. We then construct a radiance field based on these view-consistent features and use volumetric rendering to synthesize novel views of the modeled faces. Extensive experiments with quantitative and qualitative evaluation demonstrate that our method outperforms existing dynamic and few-shot NeRFs on both 3D face reconstruction and expression editing tasks. Code is available at https://fdnerf.github.io. © 2022 ACM.
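To make the pipeline in the abstract concrete, the toy sketch below illustrates the two stages it describes: warping per-view 2D features toward a shared target expression, then compositing samples along a ray with standard NeRF-style volume rendering. All function names, tensor shapes, and the feature-warping rule here are illustrative assumptions, not the authors' implementation; only the alpha-compositing step follows the usual volumetric rendering formulation the abstract refers to.

```python
import numpy as np

def warp_features(features, expr_src, expr_tgt):
    # Toy stand-in for expression-conditioned feature warping: nudge each
    # view's 2D feature map by an offset derived from the gap between its
    # source expression code and the shared target code. (The paper's CFW
    # module instead predicts a dense, identity-adaptive, 3D-constrained
    # warp field; this is only a placeholder.)
    offset = np.tanh(expr_tgt - expr_src).mean(axis=1)   # (n_views,)
    return features + offset.reshape(-1, 1, 1, 1)        # broadcast over C, H, W

def volume_render(densities, colors, deltas):
    # Standard NeRF-style alpha compositing along a single ray.
    # densities: (n_samples,), colors: (n_samples, 3), deltas: (n_samples,)
    alphas = 1.0 - np.exp(-densities * deltas)            # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1] + 1e-10)))
    weights = alphas * trans                              # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)        # composited RGB

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 16, 8, 8))                # 3 views, 16-channel 8x8 feature maps
expr_src = rng.normal(size=(3, 4))                    # per-view (inconsistent) expression codes
expr_tgt = np.tile(rng.normal(size=(1, 4)), (3, 1))   # one shared target expression
warped = warp_features(feats, expr_src, expr_tgt)     # toy "view-consistent" features

# Pretend a radiance field conditioned on the warped features produced these
# samples along one camera ray, then composite them into a pixel colour.
density = rng.uniform(0.0, 2.0, size=64)
rgb = rng.uniform(0.0, 1.0, size=(64, 3))
delta = np.full(64, 0.05)
print(volume_render(density, rgb, delta))
```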
Research Area(s)
- 3D face reconstruction, expression editing, few-shot and dynamic modeling, NeRF
Bibliographic Note
Publisher Copyright:
© 2022 ACM.
Citation Format(s)
FDNeRF: Few-shot Dynamic Neural Radiance Fields for Face Reconstruction and Expression Editing. / Zhang, Jingbo; Li, Xiaoyu; Wan, Ziyu et al.
Proceedings - SIGGRAPH Asia 2022: Conference Papers Proceedings. ed. / Soon Ki Jung; Jehee Lee; Adam Bargteil; Stephen N. Spencer. New York: Association for Computing Machinery, Inc, 2022. 12 (Proceedings - SIGGRAPH Asia Conference Papers).