UNIST: Unpaired Neural Implicit Shape Translation Network

Qimin Chen, Johannes Merz, Aditya Sanghi, Hooman Shayani, Ali Mahdavi-Amiri, Hao Zhang

December 10, 2021

Keywords: Positional Encoding, Shape Translation

Venue: CVPR 2022

Bibtex:
@article{chen2022unist,
  author = {Qimin Chen and Johannes Merz and Aditya Sanghi and Hooman Shayani and Ali Mahdavi-Amiri and Hao Zhang},
  title  = {UNIST: Unpaired Neural Implicit Shape Translation Network},
  year   = {2021},
  month  = {Dec},
  url    = {http://arxiv.org/abs/2112.05381v2}
}

Abstract

We introduce UNIST, the first deep neural implicit model for general-purpose, unpaired shape-to-shape translation, in both 2D and 3D domains. Our model is built on autoencoding implicit fields, rather than the point clouds that represent the current state of the art. Furthermore, our translation network is trained to perform the task over a latent grid representation, which combines the merits of latent-space processing and position awareness, not only enabling drastic shape transforms but also preserving spatial features and fine local details for natural shape translations. With the same network architecture, and dictated only by the input domain pairs, our model can learn both style-preserving content alteration and content-preserving style transfer. We demonstrate the generality and quality of the translation results, and compare them to well-known baselines. Code is available at https://qiminchen.github.io/unist/.
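To make the abstract's central design choice concrete, decoding implicit fields from a spatially arranged latent grid rather than a single global latent code, the sketch below shows a minimal, hypothetical PyTorch decoder. It is not the authors' released implementation; all names, layer sizes, and the occupancy-logit output are illustrative assumptions. Each query point samples its local feature from the latent grid by trilinear interpolation, keeping downstream processing position-aware, and a small MLP then predicts occupancy at that point.

    # Hypothetical sketch only; names and sizes are illustrative,
    # not the UNIST reference implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentGridImplicitDecoder(nn.Module):
        """Decode occupancy at query points from a 3D latent feature grid.

        The grid keeps features spatially arranged, so a translator
        operating on it stays position-aware; each query point gathers
        its local feature via trilinear interpolation before an MLP
        predicts an occupancy logit.
        """
        def __init__(self, feat_dim=32, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),  # occupancy logit per query point
            )

        def forward(self, latent_grid, points):
            # latent_grid: (B, C, D, H, W); points: (B, N, 3) in [-1, 1]
            # grid_sample expects a (B, D_out, H_out, W_out, 3) grid for
            # 5-D input, so reshape the N query points to (B, N, 1, 1, 3).
            grid = points.view(points.size(0), -1, 1, 1, 3)
            feats = F.grid_sample(latent_grid, grid, mode='bilinear',
                                  align_corners=True)  # (B, C, N, 1, 1)
            feats = feats.view(points.size(0), latent_grid.size(1), -1)
            feats = feats.transpose(1, 2)  # (B, N, C)
            # Condition on both the sampled local feature and the point
            # coordinates, a simple form of position awareness.
            return self.mlp(torch.cat([feats, points], dim=-1))  # (B, N, 1)

    # Usage with placeholder tensors:
    decoder = LatentGridImplicitDecoder(feat_dim=32)
    latent = torch.randn(2, 32, 8, 8, 8)       # latent grid from an encoder
    pts = torch.rand(2, 1024, 3) * 2 - 1       # query points in [-1, 1]^3
    occ_logits = decoder(latent, pts)          # (2, 1024, 1)

Under this reading, an unpaired translator can be trained to map latent grids of one domain to those of another, while the shared decoder keeps local detail tied to spatial position.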
