Bibtex:
@article{ramon2021h3dnet,
  title   = {H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction},
  author  = {Eduard Ramon and Gil Triginer and Janna Escur and Albert Pumarola and Jaime Garcia and Xavier Giro-i-Nieto and Francesc Moreno-Noguer},
  journal = {arXiv preprint arXiv:2107.12512},
  year    = {2021},
  url     = {http://arxiv.org/abs/2107.12512v1}
}

Abstract

Recent learning approaches that implicitly represent surface geometry using coordinate-based neural representations have shown impressive results in the problem of multi-view 3D reconstruction. The effectiveness of these techniques is, however, subject to the availability of a large number (several tens) of input views of the scene and to computationally demanding optimizations. In this paper, we tackle these limitations for the specific problem of few-shot full 3D head reconstruction, by endowing coordinate-based representations with a probabilistic shape prior that enables faster convergence and better generalization when using few input images (down to three). First, we learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations. At test time, we jointly overfit two coordinate-based neural networks to the scene, one modeling the geometry and another estimating the surface radiance, using implicit differentiable rendering. We devise a two-stage optimization strategy in which the learned prior is used to initialize and constrain the geometry during an initial optimization phase. Then, the prior is unfrozen and fine-tuned to the scene. By doing this, we achieve high-fidelity head reconstructions, including hair and shoulders, with a level of detail that consistently outperforms both state-of-the-art 3D Morphable Model-based methods in the few-shot scenario and non-parametric methods when large sets of views are available.
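
The abstract describes a two-stage fitting scheme: two coordinate-based MLPs (one for the signed-distance geometry, one for surface radiance) are overfitted to the input views via implicit differentiable rendering, with the prior-initialized geometry first kept constrained and later unfrozen for fine-tuning. Below is a minimal PyTorch sketch of that freeze-then-finetune schedule. It is not the authors' implementation: the names (CoordinateMLP, toy_scene_loss, fit), the network sizes, loss weights, and iteration counts are illustrative assumptions, and the differentiable renderer is replaced by a toy loss that only stands in for the real ray-marching pipeline.

```python
# Hypothetical sketch of the two-stage optimization, not the authors' code.
import torch
import torch.nn as nn


class CoordinateMLP(nn.Module):
    """Coordinate-based MLP mapping 3D points to a scalar (SDF) or an RGB vector."""
    def __init__(self, out_dim=1, hidden=256, depth=8):
        super().__init__()
        layers, d = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.Softplus(beta=100)]
            d = hidden
        layers += [nn.Linear(d, out_dim)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


geometry = CoordinateMLP(out_dim=1)   # assume weights initialized from the learned head prior
radiance = CoordinateMLP(out_dim=3)   # per-point surface radiance


def toy_scene_loss(pts, target_rgb):
    """Stand-in for implicit differentiable rendering. The real pipeline marches
    camera rays, finds SDF zero crossings, and compares shaded surface points to
    the input photographs; here we just project random samples toward the zero
    level set and add an eikonal regularizer on the SDF gradient."""
    sdf = geometry(pts)                                              # (N, 1) signed distances
    grad = torch.autograd.grad(sdf.sum(), pts, create_graph=True)[0]  # (N, 3) SDF gradients
    surface_pts = pts - sdf * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    rgb = torch.sigmoid(radiance(surface_pts))
    photometric = ((rgb - target_rgb) ** 2).mean()
    eikonal = ((grad.norm(dim=-1) - 1.0) ** 2).mean()
    return photometric + 0.1 * eikonal


def fit(params, steps, lr):
    """Run one optimization stage over the given trainable parameters."""
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        pts = torch.randn(1024, 3, requires_grad=True)   # stand-in for ray samples
        target_rgb = torch.rand(1024, 3)                 # stand-in for observed pixel colours
        loss = toy_scene_loss(pts, target_rgb)
        opt.zero_grad()
        loss.backward()
        opt.step()


# Stage 1: keep the prior-initialized geometry frozen and fit only the radiance
# network, so optimization starts from a plausible head shape.
for p in geometry.parameters():
    p.requires_grad_(False)
fit(list(radiance.parameters()), steps=200, lr=1e-4)

# Stage 2: unfreeze the prior and fine-tune both networks jointly to recover
# subject-specific detail (hair, shoulders) beyond what the prior expresses.
for p in geometry.parameters():
    p.requires_grad_(True)
fit(list(geometry.parameters()) + list(radiance.parameters()), steps=200, lr=1e-5)
```

In the paper the prior is learned from raw scans as an implicit shape model, so "constraining" the geometry in the first stage presumably means optimizing within that prior rather than literally freezing every geometry weight; the sketch keeps only the essential ordering of constrained fitting followed by full fine-tuning.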
