NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field

Celong Liu, Zhong Li, Junsong Yuan, Yi Xu

5/15/2021

Keywords: Speed & Computational Efficiency, Hybrid Geometry Representation

Venue: arXiv 2021

Bibtex: @article{liu2021neulf, journal = {arXiv preprint arXiv:2105.07112}, author = {Celong Liu and Zhong Li and Junsong Yuan and Yi Xu}, title = {NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field}, year = {2021}, url = {http://arxiv.org/abs/2105.07112v4}, entrytype = {article}, id = {liu2021neulf} }

Abstract

In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a 4D parameterization of the light field, so that each ray is characterized by a 4D coordinate. We then formulate the light field as a 4D function that maps these 4D coordinates to corresponding color values, and train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Different from previous light field approaches, which require dense view sampling to reliably render novel views, our method renders a novel view by sampling its rays and querying the network directly for each ray's color, thus enabling high-quality light field rendering from a sparser set of training images. Our method achieves state-of-the-art novel view synthesis results while maintaining an interactive frame rate.
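The pipeline the abstract describes has two parts: a 4D (two-plane) parameterization that turns a ray into a coordinate (u, v, s, t), and a fully connected network that maps that coordinate to RGB. The sketch below illustrates both in NumPy; the plane depths, layer widths, and random (untrained) weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

def two_plane_params(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a ray by where it crosses two parallel planes.

    Returns (u, v, s, t): the xy-coordinates of the intersections with
    z = z_uv and z = z_st. Plane depths are illustrative choices.
    """
    tu = (z_uv - origin[2]) / direction[2]
    ts = (z_st - origin[2]) / direction[2]
    u, v = origin[:2] + tu * direction[:2]
    s, t = origin[:2] + ts * direction[:2]
    return np.array([u, v, s, t])

def mlp_rgb(uvst, weights, biases):
    """Tiny fully connected network mapping a 4D ray coordinate to RGB.

    Untrained random weights stand in for the scene-specific model that
    the paper optimizes per scene.
    """
    x = uvst
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)      # ReLU hidden layers
    x = x @ weights[-1] + biases[-1]
    return 1.0 / (1.0 + np.exp(-x))         # sigmoid keeps RGB in [0, 1]

# Illustrative network: 4 -> 64 -> 64 -> 3 (widths are assumptions).
rng = np.random.default_rng(0)
sizes = [4, 64, 64, 3]
Ws = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]

# A ray parallel to the z-axis crosses both planes at the same (x, y).
ray = two_plane_params(np.array([0.2, -0.1, -1.0]), np.array([0.0, 0.0, 1.0]))
rgb = mlp_rgb(ray, Ws, bs)
```

Rendering a novel view then amounts to generating one such (u, v, s, t) coordinate per pixel of the target camera and querying the network, with no per-ray volume integration.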
