NeRF-SR: High-Quality Neural Radiance Fields using Supersampling
Chen Wang, Xian Wu, Yuan-Chen Guo, Song-Hai Zhang, Yu-Wing Tai, Shi-Min Hu
December 3, 2021
Keywords: Graphics, 2D Image Neural Fields, Sampling, Image-Based Rendering
Venue: ACM Multimedia 2022 (MM '22)
Bibtex:
@inproceedings{wang2022nerfsr,
author = {Chen Wang and Xian Wu and Yuan-Chen Guo and Song-Hai Zhang and Yu-Wing Tai and Shi-Min Hu},
title = {NeRF-SR: High-Quality Neural Radiance Fields using Supersampling},
booktitle = {Proceedings of the 30th ACM International Conference on Multimedia (MM '22)},
doi = {10.1145/3503161.3547808},
year = {2022},
url = {http://arxiv.org/abs/2112.01759v3}
}
Abstract
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs. Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron. While NeRF can produce images at arbitrary scales, it struggles at resolutions beyond the observed images. Our key insight is that NeRF benefits from 3D consistency, meaning an observed pixel absorbs information from nearby views. We first exploit this with a supersampling strategy that shoots multiple rays at each image pixel, enforcing the multi-view constraint at a sub-pixel level. We then show that NeRF-SR can further boost the performance of supersampling with a refinement network that leverages the estimated depth at hand to hallucinate details from related patches on only one HR reference image. Experimental results demonstrate that NeRF-SR generates high-quality results for novel view synthesis at HR on both synthetic and real-world datasets without any external information.
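The supersampling strategy described in the abstract can be illustrated with a minimal sketch: each LR training pixel is split into an s x s grid of sub-pixel rays, and the average of their rendered colors is supervised against the LR pixel value. The snippet below is an illustrative approximation, not the authors' implementation; `render_ray`, the pinhole intrinsics `K`, and the camera-to-world matrix `c2w` are assumed placeholders, and the camera convention (+z forward) is an assumption that varies across datasets.

```python
import numpy as np

def supersample_pixel_rays(i, j, K, c2w, scale=2):
    """Generate scale x scale sub-pixel ray directions for LR pixel (i, j).

    K is a 3x3 pinhole intrinsics matrix for the LR camera and c2w a 3x4
    camera-to-world matrix (both hypothetical inputs). Sub-ray offsets sit
    at the centers of a uniform grid inside the pixel footprint, matching
    the idea of enforcing the multi-view constraint at a sub-pixel level.
    Assumes a +z-forward camera convention.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Sub-pixel sample centers, e.g. {0.25, 0.75} for scale=2.
    offsets = (np.arange(scale) + 0.5) / scale
    dirs = []
    for dv in offsets:
        for du in offsets:
            u, v = j + du, i + dv  # continuous pixel coordinates
            d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
            d_world = c2w[:3, :3] @ d_cam
            dirs.append(d_world / np.linalg.norm(d_world))
    origin = c2w[:3, 3]
    return origin, np.stack(dirs)  # shapes (3,) and (scale*scale, 3)

def supersampled_loss(render_ray, origin, dirs, lr_pixel_rgb):
    """MSE between an LR pixel and the mean color of its sub-rays.

    `render_ray` stands in for a NeRF volume-rendering call mapping
    (origin, direction) to an RGB color.
    """
    colors = np.stack([render_ray(origin, d) for d in dirs])
    return float(np.mean((colors.mean(axis=0) - lr_pixel_rgb) ** 2))
```

In this reading, averaging the sub-ray colors models how an LR pixel integrates radiance over its footprint, so each LR observation constrains several points along nearby rays rather than a single ray.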