NeRF++: Analyzing and Improving Neural Radiance Fields

Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun

10/15/2020

Keywords: Fundamentals, Sampling

Venue: arXiv 2020

Bibtex: @article{zhang2020nerf++, title = {NeRF++: Analyzing and Improving Neural Radiance Fields}, author = {Kai Zhang and Gernot Riegler and Noah Snavely and Vladlen Koltun}, journal = {arXiv preprint arXiv:2010.07492}, year = {2020}, url = {http://arxiv.org/abs/2010.07492v2} }

Abstract

Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360° capture of bounded scenes and forward-facing capture of bounded and unbounded scenes. NeRF fits multi-layer perceptrons (MLPs) representing view-invariant opacity and view-dependent color volumes to a set of training images, and samples novel views based on volume rendering techniques. In this technical report, we first remark on radiance fields and their potential ambiguities, namely the shape-radiance ambiguity, and analyze NeRF's success in avoiding such ambiguities. Second, we address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes. Our method improves view synthesis fidelity in this challenging scenario. Code is available at https://github.com/Kai-46/nerfplusplus.
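The abstract's two key ingredients can be sketched in code: the standard NeRF volume-rendering quadrature that composites per-sample opacity and color along a ray, and an inverted-sphere-style reparametrization that maps points in an unbounded scene to bounded coordinates (a unit direction plus inverse distance). This is a minimal NumPy sketch for illustration, not the authors' implementation; the function names and array shapes are assumptions.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray (illustrative sketch).

    sigmas: (N,) densities at sampled points
    colors: (N, 3) RGB values at sampled points
    deltas: (N,) distances between consecutive samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Accumulated transmittance T_i = prod_{j<i} (1 - alpha_j), with T_1 = 1
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])
    # Composite: C = sum_i T_i * alpha_i * c_i
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

def invert_sphere(p):
    """Map a 3D point outside the unit sphere to bounded coordinates:
    a unit direction plus inverse distance 1/r (hypothetical helper,
    in the spirit of NeRF++'s treatment of unbounded backgrounds)."""
    r = np.linalg.norm(p)
    return np.concatenate([p / r, [1.0 / r]])
```

For a single, effectively opaque sample, `composite_ray` returns that sample's color; as `r` grows, `invert_sphere` keeps all four output coordinates within [-1, 1], which is what makes distant background content tractable for an MLP with bounded inputs.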
