Text2Mesh: Text-Driven Neural Stylization for Meshes
Oscar Michel, Roi Bar-On, Richard Liu, Sagie Benaim, Rana Hanocka
December 6, 2021
Keywords: Editable, Generative Models, Neural Style Field
Venue: arXiv 2021
Bibtex:
@article{michel2022text2mesh,
  author  = {Oscar Michel and Roi Bar-On and Richard Liu and Sagie Benaim and Rana Hanocka},
  title   = {Text2Mesh: Text-Driven Neural Stylization for Meshes},
  journal = {arXiv preprint arXiv:2112.03221},
  year    = {2021},
  month   = {Dec},
  url     = {http://arxiv.org/abs/2112.03221v1}
}
Abstract
In this work, we develop intuitive controls for editing the style of 3D objects. Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which conform to a target text prompt. We consider a disentangled representation of a 3D object using a fixed mesh input (content) coupled with a learned neural network, which we term neural style field network. In order to modify style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset. It can handle low-quality meshes (non-manifold, boundaries, etc.) with arbitrary genus, and does not require UV parameterization. We demonstrate the ability of our technique to synthesize a myriad of styles over a wide variety of 3D meshes.
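The abstract describes the core architecture: a fixed input mesh supplies the content, while a learned "neural style field" predicts a per-vertex color and a small displacement, and a CLIP similarity score between renders of the stylized mesh and the text prompt drives optimization. The sketch below is not the authors' implementation; it is a minimal PyTorch-style illustration of such a field, with Fourier positional encoding, and the differentiable rendering and CLIP loss only indicated in comments. The layer widths, frequency count, and the 0.1 displacement scale are illustrative assumptions.

import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map 3D points to high-frequency features so the MLP can fit fine detail."""
    def __init__(self, num_freqs=6):
        super().__init__()
        # Frequencies 2^0 ... 2^(num_freqs - 1); a common positional-encoding choice.
        self.register_buffer("freqs", (2.0 ** torch.arange(num_freqs)) * torch.pi)

    def forward(self, x):                                 # x: (V, 3) vertex positions
        proj = (x[..., None] * self.freqs).flatten(-2)    # (V, 3 * num_freqs)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class NeuralStyleField(nn.Module):
    """Predict a per-vertex RGB color and a scalar displacement along the normal."""
    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        self.encode = FourierFeatures(num_freqs)
        self.backbone = nn.Sequential(
            nn.Linear(2 * 3 * num_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Linear(hidden, 3)   # RGB per vertex
        self.disp_head = nn.Linear(hidden, 1)    # displacement magnitude

    def forward(self, verts):
        h = self.backbone(self.encode(verts))
        color = torch.sigmoid(self.color_head(h))      # colors in [0, 1]
        disp = 0.1 * torch.tanh(self.disp_head(h))     # keep geometric changes small and local
        return color, disp

# Apply the style field to a fixed "content" mesh (placeholder data here).
verts = torch.rand(1000, 3)
normals = torch.randn(1000, 3)
normals = normals / normals.norm(dim=-1, keepdim=True)

field = NeuralStyleField()
colors, disp = field(verts)
styled_verts = verts + disp * normals            # displace vertices along their normals

# Training (omitted): render the colored, displaced mesh from multiple views with a
# differentiable renderer, embed the renders and the text prompt with CLIP, and
# maximize their cosine similarity; gradients flow back into the field's weights.

In this setup the mesh connectivity never changes (it is the content), and only the field's weights are optimized against the CLIP score, which is consistent with the abstract's claim that no pre-trained generative model or specialized 3D dataset is required.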