MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su
March 29, 2021
Keywords: Speed & Computational Efficiency, Sparse Reconstruction, Generalization, Data-Driven Method, Local Conditioning
Venue: ICCV 2021
Bibtex:
@inproceedings{chen2021mvsnerf,
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  author    = {Anpei Chen and Zexiang Xu and Fuqiang Zhao and Xiaoshuai Zhang and Fanbo Xiang and Jingyi Yu and Hao Su},
  title     = {MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo},
  year      = {2021},
  url       = {http://arxiv.org/abs/2103.15595v2},
  entrytype = {inproceedings},
  id        = {chen2021mvsnerf}
}
Abstract
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis. Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference. Our approach leverages plane-swept cost volumes (widely used in multi-view stereo) for geometry-aware scene reasoning, and combines this with physically based volume rendering for neural radiance field reconstruction. We train our network on real objects in the DTU dataset, and test it on three different datasets to evaluate its effectiveness and generalizability. Our approach can generalize across scenes (even indoor scenes, completely different from our training scenes of objects) and generate realistic view synthesis results using only three input images, significantly outperforming concurrent works on generalizable radiance field reconstruction. Moreover, if dense images are captured, our estimated radiance field representation can be easily fine-tuned; this leads to fast per-scene reconstruction with higher rendering quality and substantially less optimization time than NeRF.
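To make the two ingredients named in the abstract concrete, the sketch below illustrates (1) aggregating features from a few nearby views into a plane-swept cost volume via per-voxel variance, as in MVSNet-style multi-view stereo pipelines, and (2) standard NeRF-style emission-absorption volume rendering. This is a minimal, self-contained illustration under assumed tensor shapes, not the authors' implementation; the function names and the toy data are hypothetical.

```python
# Minimal sketch (illustrative only, not the MVSNeRF codebase).
import torch

def variance_cost_volume(warped_feats: torch.Tensor) -> torch.Tensor:
    """warped_feats: (V, C, D, H, W) -- features from V input views already
    warped onto D fronto-parallel depth planes of the reference view.
    Returns a (C, D, H, W) cost volume: per-voxel variance across views."""
    mean = warped_feats.mean(dim=0)
    return (warped_feats - mean).pow(2).mean(dim=0)

def volume_render(rgb: torch.Tensor, sigma: torch.Tensor, deltas: torch.Tensor):
    """rgb: (N_rays, N_samples, 3) per-sample colors,
    sigma: (N_rays, N_samples) per-sample densities,
    deltas: (N_rays, N_samples) distances between consecutive samples.
    Standard emission-absorption compositing used by NeRF-style renderers."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                     # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                                          # accumulated transmittance
    weights = alpha * trans                                      # compositing weights
    color = (weights.unsqueeze(-1) * rgb).sum(dim=1)             # (N_rays, 3)
    return color, weights

if __name__ == "__main__":
    # Toy shapes: 3 input views (as in the paper's sparse setting), small volume.
    V, C, D, H, W = 3, 8, 32, 64, 80
    cost = variance_cost_volume(torch.randn(V, C, D, H, W))
    print(cost.shape)                        # torch.Size([8, 32, 64, 80])
    rgb = torch.rand(1024, 64, 3)
    sigma = torch.rand(1024, 64)
    deltas = torch.full((1024, 64), 2.0 / 64)
    color, _ = volume_render(rgb, sigma, deltas)
    print(color.shape)                       # torch.Size([1024, 3])
```

In the actual method, the cost volume is further processed by a 3D CNN into a neural scene encoding, and an MLP conditioned on that encoding (plus pixel colors) predicts the per-sample color and density that feed the renderer; the sketch only shows the surrounding plumbing.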