S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
Ze Yang, Shenlong Wang, Sivabalan Manivasagam, Zeng Huang, Wei-Chiu Ma, Xinchen Yan, Ersin Yumer, Raquel Urtasun
January 17, 2021
Keywords: Human (Body), Editable, Voxel Grid, Local Conditioning
Venue: CVPR 2021
Bibtex:
@inproceedings{yang2021s3,
  title     = {S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling},
  author    = {Ze Yang and Shenlong Wang and Sivabalan Manivasagam and Zeng Huang and Wei-Chiu Ma and Xinchen Yan and Ersin Yumer and Raquel Urtasun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021},
  url       = {http://arxiv.org/abs/2101.06571v1}
}
Abstract
Constructing and animating humans is an important component of building virtual worlds for a wide variety of applications, such as virtual reality or robotics testing in simulation. As there are exponentially many variations of humans with different shapes, poses, and clothing, it is critical to develop methods that can automatically reconstruct and animate humans at scale from real-world data. Towards this goal, we represent the pedestrian's shape, pose, and skinning weights as neural implicit functions that are directly learned from data. This representation enables us to handle a wide variety of pedestrian shapes and poses without explicitly fitting a human parametric body model, allowing us to cover a wider range of human geometries and topologies. We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods. Furthermore, our re-animation experiments show that we can generate 3D human animations at scale from a single RGB image (and, optionally, a LiDAR sweep) as input.
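To make the abstract's representation concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a single point-conditioned network with three implicit heads, one each for the shape, skeleton, and skinning fields named in the title. The class name S3Fields, the joint count, all layer sizes, and the skeleton parameterization (per-point offsets toward joints) are illustrative assumptions; the image/LiDAR feature extractor that produces pixel_feat is elided.

import torch
import torch.nn as nn

NUM_JOINTS = 24  # hypothetical skeleton size

class S3Fields(nn.Module):
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        # Shared trunk: maps a 3D query point plus a local image/LiDAR
        # feature to a latent code.
        self.trunk = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Shape field: per-point occupancy (inside/outside the body surface).
        self.occupancy = nn.Linear(hidden, 1)
        # Skeleton field: one 3D offset vector per joint at each query point
        # (one plausible parameterization of a skeleton field).
        self.skeleton = nn.Linear(hidden, 3 * NUM_JOINTS)
        # Skinning field: per-point blend weights over the joints.
        self.skinning = nn.Linear(hidden, NUM_JOINTS)

    def forward(self, points, pixel_feat):
        # points: (B, N, 3) query locations; pixel_feat: (B, N, feat_dim)
        # local features sampled at each point's projection.
        h = self.trunk(torch.cat([points, pixel_feat], dim=-1))
        occ = torch.sigmoid(self.occupancy(h))              # (B, N, 1)
        joint_offsets = self.skeleton(h)                    # (B, N, 3*J)
        weights = torch.softmax(self.skinning(h), dim=-1)   # (B, N, J)
        return occ, joint_offsets, weights

In a setup like this, the occupancy head would be supervised with inside/outside labels, the skeleton head with joint annotations, and the learned skinning weights would drive linear blend skinning at animation time.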