SAL: Sign Agnostic Learning of Shapes from Raw Data
Matan Atzmon, Yaron Lipman
11/23/2019
Keywords:
Venue: CVPR 2020
Bibtex:
@inproceedings{atzmon2020sal,
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
author = {Matan Atzmon and Yaron Lipman},
title = {SAL: Sign Agnostic Learning of Shapes from Raw Data},
year = {2020},
url = {http://arxiv.org/abs/1911.10414v2},
entrytype = {inproceedings},
id = {atzmon2020sal}
}
Abstract
Recently, neural networks have been used as implicit representations for surface reconstruction, modelling, learning, and generation. So far, training neural networks to be implicit representations of surfaces required training data sampled from ground-truth signed implicit functions, such as signed distance or occupancy functions, which are notoriously hard to compute. In this paper we introduce Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data, such as point clouds and triangle soups. We have tested SAL on the challenging problem of surface reconstruction from an un-oriented point cloud, as well as end-to-end human shape space learning directly from a dataset of raw scans, and achieved state-of-the-art reconstructions compared to current approaches. We believe SAL opens the door to many geometric deep learning applications with real-world data, alleviating the usual painstaking, often manual, pre-processing.
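The key idea the abstract describes is a loss that supervises only the magnitude of the implicit function, so no ground-truth sign is needed. Below is a minimal PyTorch sketch of that idea: it matches |f(x)| to the unsigned distance to the raw point cloud. The network size, sampling heuristic, and all names (ImplicitMLP, sal_loss, etc.) are illustrative assumptions, not the authors' exact architecture, loss variant, or training setup.

```python
# Minimal sketch of a sign-agnostic loss, assuming plain PyTorch.
import torch
import torch.nn as nn

class ImplicitMLP(nn.Module):
    """Small MLP f_theta: R^3 -> R serving as the implicit surface representation."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def unsigned_distance(samples, points):
    """Unsigned distance h(x) from each query sample to the raw point cloud."""
    d = torch.cdist(samples, points)      # (num_samples, num_points) pairwise distances
    return d.min(dim=-1).values           # nearest-neighbor distance per sample

def sal_loss(f, samples, points):
    """Sign-agnostic loss: match |f(x)| to the unsigned distance h(x).

    Only |f| is supervised, so the loss is invariant to the sign of f;
    a suitably initialized network can converge to a signed function whose
    zero level set reconstructs the surface.
    """
    h = unsigned_distance(samples, points)
    return (f(samples).abs() - h).abs().mean()

# Toy usage: fit an implicit function to a random, un-oriented point cloud.
points = torch.rand(1024, 3)
f = ImplicitMLP()
opt = torch.optim.Adam(f.parameters(), lr=1e-4)
for step in range(100):
    # Sample queries near the data (a common heuristic, not the paper's
    # exact sampling distribution).
    idx = torch.randint(0, points.shape[0], (512,))
    samples = points[idx] + 0.05 * torch.randn(512, 3)
    loss = sal_loss(f, samples, points)
    opt.zero_grad(); loss.backward(); opt.step()
```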