CityNeRF: Building NeRF at City Scale
Yuanbo Xiangli, Linning Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao, Christian Theobalt, Bo Dai, Dahua Lin
12/10/2021
Keywords: Coarse-to-Fine, Large-Scale Scenes, Network Architecture
Venue: arXiv 2021
Bibtex:
@article{xiangli2021citynerf,
  author = {Yuanbo Xiangli and Linning Xu and Xingang Pan and Nanxuan Zhao and Anyi Rao and Christian Theobalt and Bo Dai and Dahua Lin},
  title = {CityNeRF: Building NeRF at City Scale},
  year = {2021},
  month = {Dec},
  url = {http://arxiv.org/abs/2112.05504v2}
}
Abstract
Neural Radiance Field (NeRF) has achieved outstanding performance in modeling 3D objects and controlled scenes, usually at a single scale. In this work, we make the first attempt to bring NeRF to city scale, with views ranging from satellite level, capturing an overview of a city, to ground level, showing the intricate details of individual buildings. The wide span of camera distances to the scene yields multi-scale data with varying levels of detail and spatial coverage, which poses great challenges to vanilla NeRF and biases it towards compromised results. To address these issues, we introduce CityNeRF, a progressive learning paradigm that grows the NeRF model and the training set synchronously. Starting by fitting distant views with a shallow base block, the model appends new blocks as training progresses to accommodate the emerging details in increasingly close views. This strategy effectively activates high-frequency channels in the positional encoding and unfolds more complex details as training proceeds. We demonstrate the superiority of CityNeRF in modeling diverse city-scale scenes with drastically varying views, and its support for rendering views at different levels of detail.
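The abstract's claim that progressive training "activates high-frequency channels in the positional encoding" can be illustrated with a minimal sketch. The function and schedule below are illustrative assumptions, not the authors' implementation: a standard sinusoidal positional encoding is computed with only the first few frequency bands unmasked, and more bands are unlocked at each training stage as closer views join the training set.

```python
import numpy as np

def positional_encoding(x, num_freqs, active_freqs):
    """Sinusoidal positional encoding in which only the first `active_freqs`
    frequency bands are kept; higher bands are zeroed out, mimicking the
    progressive activation described in the abstract (illustrative sketch)."""
    feats = []
    for i in range(num_freqs):
        for fn in (np.sin, np.cos):
            band = fn((2.0 ** i) * np.pi * x)
            if i >= active_freqs:
                band = np.zeros_like(band)  # masked until a later stage
            feats.append(band)
    return np.concatenate(feats, axis=-1)

# Hypothetical schedule: each stage adds closer-view images and unlocks
# more high-frequency bands so the model can represent finer detail.
x = np.array([[0.25, 0.5, 0.75]])  # a sample 3D point
num_freqs = 6
for stage, active in enumerate([2, 4, 6], start=1):
    enc = positional_encoding(x, num_freqs, active)
    print(f"stage {stage}: encoding shape {enc.shape}, "
          f"{np.count_nonzero(enc)} non-zero entries")
```

Because the masked bands are exactly zero, early stages see only low-frequency (coarse) structure, while later stages receive the full encoding; the paper's appended blocks would consume these newly activated channels.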