Stylizing 3D Scene via Implicit Representation and HyperNetwork

Pei-Ze Chiang, Meng-Shiun Tsai, Hung-Yu Tseng, Wei-Sheng Lai, Wei-Chen Chiu

5/27/2021

Keywords: Editable, Data-Driven Method, Hypernetwork/Meta-learning

Venue: ARXIV 2021

Bibtex: @article{chiang2021stylizing, title = {Stylizing 3D Scene via Implicit Representation and HyperNetwork}, author = {Pei-Ze Chiang and Meng-Shiun Tsai and Hung-Yu Tseng and Wei-Sheng Lai and Wei-Chen Chiu}, journal = {arXiv preprint arXiv:2105.13016}, year = {2021}, url = {http://arxiv.org/abs/2105.13016v2} }

Abstract

In this work, we aim to address the 3D scene stylization problem - generating stylized images of the scene at arbitrary novel view angles. A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches, but it often leads to blurry results or inconsistent appearance. Inspired by the high-quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style. Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork that transfers the style information into the scene representation. In particular, our implicit representation model disentangles the scene into geometry and appearance branches, and the hypernetwork learns to predict the parameters of the appearance branch from the reference style image. To alleviate the training difficulties and memory burden, we propose a two-stage training procedure and a patch sub-sampling approach to optimize the style and content losses with the neural radiance field model. After optimization, our model is able to render consistent novel views at arbitrary view angles with arbitrary style. Both quantitative evaluation and a human subject study demonstrate that the proposed method generates faithful stylization results with consistent appearance across different views.
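
To make the described architecture concrete, below is a minimal PyTorch sketch of the two-branch radiance field with a style hypernetwork. It assumes standard NeRF positional-encoding sizes (63-dimensional encoded positions, 27-dimensional encoded view directions); the class names, layer widths, and the way the style embedding is obtained are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class GeometryBranch(nn.Module):
    """Maps encoded 3D positions to volume density and a feature vector."""
    def __init__(self, pos_dim=63, hidden=256, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)        # volume density
        self.feat = nn.Linear(hidden, feat_dim)  # features fed to the appearance branch

    def forward(self, x):
        h = self.mlp(x)
        return self.sigma(h), self.feat(h)

class StyleHypernetwork(nn.Module):
    """Predicts the appearance branch's weights from a style embedding
    (e.g. pooled VGG features of the reference style image)."""
    def __init__(self, style_dim=256, feat_dim=256, dir_dim=27, hidden=128):
        super().__init__()
        self.in_dim, self.hidden = feat_dim + dir_dim, hidden
        # Flat parameter count of a tiny two-layer appearance MLP (weights + biases).
        n_params = (self.in_dim * hidden + hidden) + (hidden * 3 + 3)
        self.net = nn.Sequential(nn.Linear(style_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_params))

    def forward(self, style_emb):
        return self.net(style_emb)  # flat vector of appearance-branch parameters

def appearance_forward(params, feat, view_dir, hidden=128):
    """Appearance branch: colors from features + view directions, computed with
    the hypernetwork-predicted parameters instead of its own learned weights."""
    x = torch.cat([feat, view_dir], dim=-1)
    in_dim = x.shape[-1]
    i = 0
    w1 = params[i:i + in_dim * hidden].view(hidden, in_dim); i += in_dim * hidden
    b1 = params[i:i + hidden];                               i += hidden
    w2 = params[i:i + hidden * 3].view(3, hidden);           i += hidden * 3
    b2 = params[i:i + 3]
    h = torch.relu(x @ w1.t() + b1)
    return torch.sigmoid(h @ w2.t() + b2)  # RGB in [0, 1]

# Usage: one (unbatched) style embedding stylizes every rendered sample point.
geom, hyper = GeometryBranch(), StyleHypernetwork()
style_emb = torch.randn(256)                 # stand-in for a style-image embedding
params = hyper(style_emb)
sigma, feat = geom(torch.randn(1024, 63))    # 1024 positionally encoded sample points
rgb = appearance_forward(params, feat, torch.randn(1024, 27))

Note that only the appearance branch is re-parameterized per style image; the geometry (density) branch is untouched, which is what lets the same scene be rendered consistently across views and across different styles.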
