Bibtex:

@article{bi2020neural,
  author  = {Sai Bi and Zexiang Xu and Pratul Srinivasan and Ben Mildenhall and Kalyan Sunkavalli and Milos Hasan and Yannick Hold-Geoffroy and David Kriegman and Ravi Ramamoorthi},
  title   = {Neural Reflectance Fields for Appearance Acquisition},
  journal = {arXiv preprint arXiv:2008.03824},
  year    = {2020},
  url     = {http://arxiv.org/abs/2008.03824v2}
}

Abstract

We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene using a fully-connected neural network. We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light. We demonstrate that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, and accurately model the appearance of real-world scenes with complex geometry and reflectance. Once estimated, they can be used to render photo-realistic images under novel viewpoint and (non-collocated) lighting conditions and accurately reproduce challenging effects like specularities, shadows and occlusions. This allows us to perform high-quality view synthesis and relighting that is significantly better than previous methods. We also demonstrate that we can compose the estimated neural reflectance field of a real scene with traditional scene models and render them using standard Monte Carlo rendering engines. Our work thus enables a complete pipeline from high-quality and practical appearance acquisition to 3D scene composition and rendering.
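To make the two core ideas in the abstract concrete, here is a minimal sketch in PyTorch of (1) a fully-connected network that maps a 3D point to volume density, normal, and reflectance parameters, and (2) differentiable ray marching under a collocated camera-light. This is not the authors' code: the class and function names, layer sizes, sample count, and the simplified Lambertian shading are all assumptions for illustration; the paper evaluates a full physically-based BRDF (including specular lobes) at each sample.

```python
# Minimal sketch (assumed, not the authors' implementation) of a neural
# reflectance field with differentiable ray marching under a collocated
# camera-light, loosely following the abstract's description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralReflectanceField(nn.Module):
    """MLP mapping a 3D point to volume density, normal, and reflectance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            # outputs: 1 density + 3 normal + 3 albedo + 1 roughness
            nn.Linear(hidden, 8),
        )

    def forward(self, x):                      # x: (N, 3) points
        out = self.mlp(x)
        sigma = F.softplus(out[..., 0])        # non-negative volume density
        normal = F.normalize(out[..., 1:4], dim=-1)
        albedo = torch.sigmoid(out[..., 4:7])  # diffuse albedo in [0, 1]
        rough = torch.sigmoid(out[..., 7:8])   # unused in this Lambertian sketch
        return sigma, normal, albedo, rough

def render_ray(field, origin, direction, near=0.0, far=1.0, n_samples=64):
    """Differentiable ray march of one ray; `direction` assumed unit length.

    With camera and light collocated, transmittance toward the light equals
    transmittance toward the camera, so shadowing reuses the same
    accumulated opacity rather than requiring a second ray march.
    """
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction      # (n_samples, 3) sample points
    sigma, normal, albedo, rough = field(pts)

    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)    # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                          # transmittance toward camera

    # Simplified shading: Lambertian term under the collocated point light,
    # approximated as directional. (The paper uses a full BRDF here.)
    light_dir = -direction                     # from point toward light/camera
    cos = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    radiance = albedo * cos

    # Light is attenuated by transmittance both toward the point and back
    # to the camera, which is what produces shadows in this setting.
    weights = trans * alpha
    return (weights[:, None] * trans[:, None] * radiance).sum(dim=0)
```

During training, rays rendered this way would be compared against pixels of the captured flash photographs, with gradients flowing through the ray marcher into the MLP weights; once fit, the light direction and position can be decoupled from the camera to relight the scene.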
