Portrait Neural Radiance Fields from a Single Image
Chen Gao, Yichang Shih, Wei-Sheng Lai, Chia-Kai Liang, Jia-Bin Huang
12/10/2020
Keywords: Human (Head), Sparse Reconstruction, Generalization, Data-Driven Method
Venue: ARXIV 2020
Bibtex:
@article{gao2020portraitnerf,
  title   = {Portrait Neural Radiance Fields from a Single Image},
  author  = {Chen Gao and Yichang Shih and Wei-Sheng Lai and Chia-Kai Liang and Jia-Bin Huang},
  journal = {arXiv preprint arXiv:2012.05903},
  year    = {2020},
  url     = {http://arxiv.org/abs/2012.05903v2}
}
Abstract
We present a method for estimating Neural Radiance Fields (NeRF) from a single headshot portrait. While NeRF has demonstrated high-quality view synthesis, it requires multiple images of static scenes and is thus impractical for casual captures and moving subjects. In this work, we propose to pretrain the weights of a multilayer perceptron (MLP), which implicitly models the volumetric density and colors, with a meta-learning framework using a light stage portrait dataset. To improve the generalization to unseen faces, we train the MLP in the canonical coordinate space approximated by 3D face morphable models. We quantitatively evaluate the method using controlled captures and demonstrate the generalization to real portrait images, showing favorable results against state-of-the-art methods.
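To make the meta-learned pretraining concrete, the sketch below shows a Reptile-style outer loop that moves a shared NeRF-like MLP initialization toward per-subject adapted weights. This is not the authors' code: `TinyNeRFMLP`, `inner_loop`, `reptile_pretrain`, the hyperparameters, and the random placeholder data (standing in for the light stage portrait dataset) are all illustrative assumptions, the canonical 3DMM-aligned coordinates and volume rendering are omitted, and the specific meta-learning update used in the paper may differ.

```python
# Hedged sketch (assumed details, not the paper's implementation):
# Reptile-style meta-learning of an initial NeRF-like MLP in PyTorch.
import copy
import torch
import torch.nn as nn


class TinyNeRFMLP(nn.Module):
    """Toy stand-in for the NeRF MLP: maps a 3D point (ideally expressed in a
    canonical face coordinate space) to volumetric density and RGB color."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, r, g, b)
        )

    def forward(self, x):
        out = self.net(x)
        sigma = torch.relu(out[..., :1])    # density must be non-negative
        rgb = torch.sigmoid(out[..., 1:])   # colors in [0, 1]
        return sigma, rgb


def inner_loop(model, batch, steps=4, lr=1e-3):
    """Adapt a copy of the shared weights to one subject (one 'task').
    A real implementation would minimize a volume-rendering photometric loss;
    here a direct per-point color loss keeps the sketch self-contained."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    points, target_rgb = batch
    for _ in range(steps):
        _, rgb = adapted(points)
        loss = torch.mean((rgb - target_rgb) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted


def reptile_pretrain(model, tasks, meta_lr=0.1, epochs=3):
    """Outer loop: nudge the shared initialization toward each subject-adapted
    solution, yielding weights that finetune quickly on a new portrait."""
    for _ in range(epochs):
        for batch in tasks:
            adapted = inner_loop(model, batch)
            with torch.no_grad():
                for p, q in zip(model.parameters(), adapted.parameters()):
                    p += meta_lr * (q - p)  # Reptile meta-update
    return model


if __name__ == "__main__":
    # Placeholder "tasks": random canonical-space points with target colors,
    # one tuple per subject, standing in for light stage captures.
    tasks = [(torch.randn(256, 3), torch.rand(256, 3)) for _ in range(8)]
    meta_model = reptile_pretrain(TinyNeRFMLP(), tasks)
    print("meta-initialized weights ready for test-time finetuning on a single portrait")
```

At test time, the meta-learned initialization would be finetuned on the single input portrait before rendering novel views; in this sketch that corresponds to one more call to `inner_loop` on the new subject's data.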