UV Volumes for Real-time Rendering of Editable Free-view Human Performance

Yue Chen, Xuan Wang, Xingyu Chen, Qi Zhang, Xiaoyu Li, Yu Guo, Jue Wang, Fei Wang

03/27/2022

Keywords: Speed & Computational Efficiency, Sparse Reconstruction, Dynamic/Temporal, Human (Body), Editable

Venue: ARXIV 2022

Bibtex:
@article{chen2022uvvolumes,
  title  = {UV Volumes for Real-time Rendering of Editable Free-view Human Performance},
  author = {Yue Chen and Xuan Wang and Xingyu Chen and Qi Zhang and Xiaoyu Li and Yu Guo and Jue Wang and Fei Wang},
  year   = {2022},
  month  = {Mar},
  url    = {http://arxiv.org/abs/2203.14402v3}
}

Abstract

Neural volume rendering enables photo-realistic, free-viewpoint rendering of a human performer, a critical task in immersive VR/AR applications. However, this practice is severely limited by the high computational cost of the rendering process. To address this problem, we propose UV Volumes, a new approach that can render an editable free-view video of a human performer in real time. It separates the high-frequency (i.e., non-smooth) human appearance from the 3D volume and encodes it into 2D neural texture stacks (NTS). The smooth UV volumes allow much smaller and shallower neural networks to obtain densities and texture coordinates in 3D, while detailed appearance is captured in the 2D NTS. For editability, the mapping between the parameterized human model and the smooth texture coordinates allows for better generalization to novel poses and shapes. Furthermore, the use of NTS enables interesting applications, e.g., retexturing. Extensive experiments on the CMU Panoptic, ZJU Mocap, and H36M datasets show that our model can render 960 × 540 images at 30 FPS on average, with photo-realism comparable to state-of-the-art methods. The project and supplementary materials are available at https://github.com/fanegg/UV-Volumes.
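The pipeline described in the abstract — a small MLP that predicts density and smooth UV coordinates per 3D sample, a 2D neural texture stack queried at those UVs for high-frequency appearance, and standard volume-rendering compositing — can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed module names, feature sizes, and pose conditioning; it is not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch of a UV-Volumes-style renderer. All names, dimensions,
# and the pose-conditioning scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UVVolume(nn.Module):
    """Small MLP: 3D point (+ pose code) -> density and smooth UV coordinates."""

    def __init__(self, pose_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 2),            # density + (u, v)
        )

    def forward(self, xyz, pose_code):
        out = self.mlp(torch.cat([xyz, pose_code], dim=-1))
        sigma = F.softplus(out[..., :1])         # non-negative density
        uv = torch.tanh(out[..., 1:])            # texture coordinates in [-1, 1]
        return sigma, uv


class NeuralTextureStack(nn.Module):
    """Learnable 2D feature map; the high-frequency appearance lives here."""

    def __init__(self, channels=16, resolution=256):
        super().__init__()
        self.texture = nn.Parameter(torch.randn(1, channels, resolution, resolution) * 0.01)
        self.decoder = nn.Sequential(            # tiny decoder: texture features + view dir -> RGB
            nn.Linear(channels + 3, 32), nn.ReLU(),
            nn.Linear(32, 3), nn.Sigmoid(),
        )

    def forward(self, uv, view_dir):
        # uv: (N, 2) in [-1, 1]; grid_sample performs a bilinear lookup on the 2D stack.
        grid = uv.view(1, -1, 1, 2)
        feat = F.grid_sample(self.texture, grid, align_corners=True)   # (1, C, N, 1)
        feat = feat.view(self.texture.shape[1], -1).t()                # (N, C)
        return self.decoder(torch.cat([feat, view_dir], dim=-1))


def composite(sigma, rgb, deltas):
    """Standard volume-rendering accumulation along each ray.

    sigma: (rays, samples, 1), rgb: (rays, samples, 3), deltas: (rays, samples)
    """
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)                    # (rays, 3)
```

Because densities and UV coordinates vary smoothly in 3D, the per-sample MLP can stay small and shallow, while the detailed appearance is confined to the 2D texture stack; this split is what makes real-time rendering feasible and turns retexturing into a simple texture replacement.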
