Dynamic View Synthesis from Dynamic Monocular Video
Chen Gao, Ayush Saraf, Johannes Kopf, Jia-Bin Huang
5/13/2021
Keywords: Dynamic/Temporal
Venue: arXiv 2021
Bibtex:
@article{gao2021dynamic,
  author  = {Chen Gao and Ayush Saraf and Johannes Kopf and Jia-Bin Huang},
  title   = {Dynamic View Synthesis from Dynamic Monocular Video},
  journal = {arXiv preprint arXiv:2105.06468},
  year    = {2021},
  url     = {http://arxiv.org/abs/2105.06468v1}
}
Abstract
We present an algorithm for generating novel views at an arbitrary viewpoint and at any input time step given a monocular video of a dynamic scene. Our work builds upon recent advances in neural implicit representation and uses continuous and differentiable functions for modeling the time-varying structure and appearance of the scene. We jointly train a time-invariant static NeRF and a time-varying dynamic NeRF, and learn how to blend the results in an unsupervised manner. However, learning this implicit function from a single video is highly ill-posed (with infinitely many solutions that match the input video). To resolve the ambiguity, we introduce regularization losses to encourage a more physically plausible solution. We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.
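The abstract's core idea is to query two radiance fields per sample point, a time-invariant static one and a time-varying dynamic one, and combine their outputs with a learned, unsupervised blending weight. Below is a minimal PyTorch sketch of that blending step; the MLP sizes, the lack of positional encoding, and the choice to predict the blending weight from the dynamic branch are assumptions for illustration, not the authors' released implementation.

```python
# Sketch only: toy MLPs and a per-point blend of static/dynamic predictions.
import torch
import torch.nn as nn


class TinyNeRF(nn.Module):
    """Small MLP mapping an encoded input to (rgb, sigma[, blend])."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)


# Static branch:  (x, y, z)    -> rgb + sigma          (time-invariant)
# Dynamic branch: (x, y, z, t) -> rgb + sigma + blend  (time-varying)
static_nerf = TinyNeRF(in_dim=3, out_dim=4)
dynamic_nerf = TinyNeRF(in_dim=4, out_dim=5)


def blended_point_query(xyz, t):
    """Blend static and dynamic predictions at sampled points along a ray."""
    rgb_s, sigma_s = static_nerf(xyz).split([3, 1], dim=-1)
    out_d = dynamic_nerf(torch.cat([xyz, t], dim=-1))
    rgb_d, sigma_d, blend = out_d.split([3, 1, 1], dim=-1)
    b = torch.sigmoid(blend)  # learned, unsupervised blending weight in [0, 1]
    rgb = b * torch.sigmoid(rgb_d) + (1.0 - b) * torch.sigmoid(rgb_s)
    sigma = b * torch.relu(sigma_d) + (1.0 - b) * torch.relu(sigma_s)
    return rgb, sigma


# Example: 1024 sampled points queried at a single normalized time step.
xyz = torch.rand(1024, 3)
t = torch.full((1024, 1), 0.5)
rgb, sigma = blended_point_query(xyz, t)
print(rgb.shape, sigma.shape)  # torch.Size([1024, 3]) torch.Size([1024, 1])
```

In a full pipeline these blended colors and densities would feed standard NeRF volume rendering along each ray, with photometric loss against the input frames plus the paper's regularization losses to keep the dynamic component physically plausible.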