NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video

Jiaming Sun, Yiming Xie, Linghao Chen, Xiaowei Zhou, Hujun Bao

April 1, 2021

Keywords: Robotics, SLAM, Hybrid Geometry Representation, Voxel Grid, Local Conditioning

Venue: CVPR 2021

Bibtex:
@inproceedings{sun2021neuralrecon,
  title     = {NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video},
  author    = {Jiaming Sun and Yiming Xie and Linghao Chen and Xiaowei Zhou and Hujun Bao},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021},
  url       = {http://arxiv.org/abs/2104.00681v1}
}

Abstract

We present a novel framework named NeuralRecon for real-time 3D scene reconstruction from a monocular video. Unlike previous methods that estimate single-view depth maps separately on each keyframe and fuse them later, we propose to directly reconstruct local surfaces, represented as sparse TSDF volumes, for each video fragment sequentially with a neural network. A learning-based TSDF fusion module based on gated recurrent units guides the network to fuse features from previous fragments. This design allows the network to capture the local smoothness and global shape priors of 3D surfaces as it sequentially reconstructs them, resulting in accurate, coherent, and real-time surface reconstruction. Experiments on the ScanNet and 7-Scenes datasets show that our system outperforms state-of-the-art methods in both accuracy and speed. To the best of our knowledge, this is the first learning-based system able to reconstruct dense, coherent 3D geometry in real time.
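To make the GRU-based fusion concrete, below is a minimal PyTorch sketch of a convolutional GRU that fuses each fragment's voxel features into a hidden state carried across fragments. This is an illustrative assumption, not the authors' implementation: the class name ConvGRUFusion, the channel count, and the dense nn.Conv3d gates are placeholders, whereas NeuralRecon operates coarse-to-fine on sparse TSDF volumes with sparse 3D convolutions.

import torch
import torch.nn as nn

class ConvGRUFusion(nn.Module):
    """Hypothetical dense stand-in for a GRU-style TSDF feature fusion module."""
    def __init__(self, channels: int):
        super().__init__()
        # Gates are computed from the concatenated [hidden, input] features.
        self.update_gate = nn.Conv3d(2 * channels, channels, 3, padding=1)
        self.reset_gate = nn.Conv3d(2 * channels, channels, 3, padding=1)
        self.out_conv = nn.Conv3d(2 * channels, channels, 3, padding=1)

    def forward(self, x_t, h_prev):
        # x_t:    features of the current fragment, shape (B, C, D, H, W)
        # h_prev: hidden state accumulated from previous fragments, same shape
        hx = torch.cat([h_prev, x_t], dim=1)
        z = torch.sigmoid(self.update_gate(hx))   # how much to overwrite
        r = torch.sigmoid(self.reset_gate(hx))    # how much history to expose
        h_tilde = torch.tanh(self.out_conv(torch.cat([r * h_prev, x_t], dim=1)))
        return (1 - z) * h_prev + z * h_tilde     # fused hidden state

# Usage: fuse a stream of fragment feature volumes sequentially.
fusion = ConvGRUFusion(channels=32)
h = torch.zeros(1, 32, 24, 24, 24)               # initial hidden state
for x_t in [torch.randn(1, 32, 24, 24, 24) for _ in range(3)]:
    h = fusion(x_t, h)                           # h carries cross-fragment context
# A TSDF head (e.g., a 1x1x1 conv) would then predict TSDF values from h.

The update gate z decides how much of the accumulated hidden state to overwrite with the current fragment's features, which is how this style of fusion can integrate new local observations while preserving global shape context.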
