Revealing Occlusions with 4D Neural Fields
Basile Van Hoorick, Purva Tendulkar, Didac Suris, Dennis Park, Simon Stent, Carl Vondrick
04/22/2022
Keywords: Dynamic/Temporal; Generalization; Local Conditioning; Object Permanence or Occlusions; Featurized Point Cloud (instead of Voxel Grid for conditioning); Scene Priors; Segmentation; Tracking
Venue: CVPR 2022
Bibtex:
@article{hoorick2022revealing,
  author = {Basile Van Hoorick and Purva Tendulkar and Didac Suris and Dennis Park and Simon Stent and Carl Vondrick},
  title = {Revealing Occlusions with 4D Neural Fields},
  year = {2022},
  month = {Apr},
  url = {http://arxiv.org/abs/2204.10916v1}
}
Abstract
For computer vision systems to operate in dynamic situations, they need to be able to represent and reason about object permanence. We introduce a framework for learning to estimate 4D visual representations from monocular RGB-D, which is able to persist objects, even once they become obstructed by occlusions. Unlike traditional video representations, we encode point clouds into a continuous representation, which permits the model to attend across the spatiotemporal context to resolve occlusions. On two large video datasets that we release along with this paper, our experiments show that the representation is able to successfully reveal occlusions for several tasks, without any architectural changes. Visualizations show that the attention mechanism automatically learns to follow occluded objects. Since our approach can be trained end-to-end and is easily adaptable, we believe it will be useful for handling occlusions in many video understanding tasks. Data, code, and models are available at https://occlusions.cs.columbia.edu/.
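The abstract describes the core mechanism: a continuous 4D representation that can be queried at arbitrary spacetime coordinates, with attention over a featurized point cloud (rather than a voxel grid) providing the conditioning. Below is a minimal PyTorch sketch of that idea. All module names, dimensions, and the occupancy-only output head are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class OcclusionField(nn.Module):
    """Hypothetical sketch: a continuous field over (x, y, z, t) that
    cross-attends into a featurized point cloud. Sizes and heads are
    assumptions; the paper's architecture may differ."""
    def __init__(self, feat_dim=128, hidden=256, n_heads=4):
        super().__init__()
        # Per-point encoder: lifts each observed (x, y, z, t, r, g, b)
        # point to a feature vector (stand-in for the point-cloud encoder).
        self.point_encoder = nn.Sequential(
            nn.Linear(7, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        # Embedding for a continuous spacetime query coordinate (x, y, z, t).
        self.query_embed = nn.Linear(4, feat_dim)
        # Cross-attention lets each query attend across the whole
        # spatiotemporal context, e.g. to track an object through occlusion.
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        # Decoder head: here only occupancy; other tasks (segmentation,
        # color) could reuse the same backbone with a different head.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, points, queries):
        # points:  (B, N, 7)  observed monocular RGB-D point cloud over time
        # queries: (B, Q, 4)  continuous (x, y, z, t) coordinates to decode
        ctx = self.point_encoder(points)   # (B, N, F) featurized context
        q = self.query_embed(queries)      # (B, Q, F) query embeddings
        q, _ = self.attn(q, ctx, ctx)      # attend across spacetime context
        return self.decoder(q)             # (B, Q, 1) occupancy logits

# Usage: decode occupancy at arbitrary spacetime points, including
# locations that are occluded in the input views.
model = OcclusionField()
pts = torch.randn(1, 1024, 7)
qry = torch.rand(1, 64, 4)
occ = model(pts, qry)  # (1, 64, 1)
```

Because the representation is continuous rather than tied to a fixed voxel grid, the same trained model can be queried at any resolution or time step, which is what lets it serve several downstream tasks without architectural changes.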