Neural Radiance Flow for 4D View Synthesis and Video Processing

Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, Jiajun Wu

12/17/2020

Keywords: Dynamic/Temporal

Venue: ICCV 2021

Bibtex:

@inproceedings{du2021nerflow,
  title     = {Neural Radiance Flow for 4D View Synthesis and Video Processing},
  author    = {Yilun Du and Yinan Zhang and Hong-Xing Yu and Joshua B. Tenenbaum and Jiajun Wu},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year      = {2021},
  url       = {http://arxiv.org/abs/2012.09790v2}
}

Abstract

We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when input images are captured with only one camera. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and denoising without any additional supervision.
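As a rough illustration of the core idea (not the authors' implementation), a 4D spatio-temporal radiance field can be viewed as a learned function mapping a 3D position, viewing direction, and time to a volume density and an RGB color. A minimal NumPy sketch with a toy randomly initialized MLP standing in for the trained network; all layer sizes and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP standing in for the trained implicit scene network.
# Input: 3D position + 3D view direction + time = 7 dims (hypothetical sizes).
W1, b1 = rng.normal(size=(7, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 4)), np.zeros(4)

def radiance_field(xyz, view_dir, t):
    """Query the 4D field: return (density, rgb) at position xyz and time t."""
    inp = np.concatenate([xyz, view_dir, [t]])
    h = np.maximum(inp @ W1 + b1, 0.0)        # ReLU hidden layer
    out = h @ W2 + b2
    density = np.log1p(np.exp(out[0]))        # softplus -> non-negative density
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))      # sigmoid -> colors in [0, 1]
    return density, rgb

# Querying the same spatial point at two times can yield different outputs,
# which is what lets the representation model scene dynamics.
density, rgb = radiance_field(np.array([0.1, 0.2, 0.3]),
                              np.array([0.0, 0.0, 1.0]), t=0.5)
```

In the actual method, such a field is trained from posed RGB images via volume rendering, with additional consistency losses tying together the scene's appearance and its motion over time.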
