UnrealEgo: A New Dataset
for Robust Egocentric 3D Human Motion Capture
Abstract
We present UnrealEgo, a new large-scale naturalistic dataset for egocentric 3D human pose estimation. UnrealEgo is based on an advanced concept of eyeglasses equipped with two fisheye cameras that can be used in unconstrained environments. We design a virtual prototype of the glasses and attach it to 3D human models for stereo view capture. We next generate a large corpus of human motions. As a consequence, UnrealEgo is the first dataset to provide in-the-wild stereo images with the largest variety of motions among existing egocentric datasets. Furthermore, we propose a new benchmark method with the simple yet effective idea of devising a 2D keypoint estimation module for stereo inputs to improve 3D human pose estimation. Extensive experiments show that our approach outperforms previous state-of-the-art methods both qualitatively and quantitatively.
Glasses-based Setup
Figure 1: Overview of the proposed UnrealEgo setup
Dataset Comparison
Figure 3: Comparison of datasets for egocentric 3D human pose estimation
Motion Diversity
Proposed Benchmark Method
Figure 5: Overview of the proposed method. Our network consists of a 2D module to predict 2D heatmaps of joint positions from stereo inputs (Sec. 4.1) and a 3D module to estimate 3D joint positions from the heatmaps (Sec. 4.2).
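To make the two-stage design in Figure 5 concrete, below is a minimal PyTorch-style sketch, assuming a weight-shared 2D module that predicts per-joint heatmaps for the left and right fisheye views and a 3D module that regresses 3D joint positions from the stacked stereo heatmaps. The layer sizes, the joint count (15 here), and the plain convolutional/MLP blocks are illustrative placeholders, not the architecture used in the paper.

import torch
import torch.nn as nn

class Stereo2DModule(nn.Module):
    # Illustrative 2D module: predicts per-joint heatmaps for each fisheye view
    # with a backbone shared across the two views (placeholder layers).
    def __init__(self, num_joints=15):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_joints, 1)

    def forward(self, left_img, right_img):
        # Same weights applied to both fisheye views.
        hm_left = self.head(self.backbone(left_img))
        hm_right = self.head(self.backbone(right_img))
        return hm_left, hm_right

class HeatmapTo3DModule(nn.Module):
    # Illustrative 3D module: regresses 3D joint positions from the
    # channel-wise stacked stereo heatmaps.
    def __init__(self, num_joints=15, heatmap_size=64):
        super().__init__()
        in_dim = 2 * num_joints * heatmap_size * heatmap_size
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_joints * 3),
        )
        self.num_joints = num_joints

    def forward(self, hm_left, hm_right):
        x = torch.cat([hm_left, hm_right], dim=1)  # (B, 2*J, H, W)
        return self.mlp(x).view(-1, self.num_joints, 3)

if __name__ == "__main__":
    B, J, H = 2, 15, 64
    left, right = torch.randn(B, 3, H, H), torch.randn(B, 3, H, H)
    module_2d = Stereo2DModule(num_joints=J)
    module_3d = HeatmapTo3DModule(num_joints=J, heatmap_size=H)
    hm_l, hm_r = module_2d(left, right)
    pose_3d = module_3d(hm_l, hm_r)
    print(pose_3d.shape)  # torch.Size([2, 15, 3])

The key point this sketch illustrates is the interface between the two stages: the 3D module consumes heatmaps from both views jointly, so stereo cues are available when lifting 2D keypoints to 3D.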
Downloads
Please see our license and GitHub page for more details on the UnrealEgo dataset.
Citation
@inproceedings{hakada2022unrealego,
    title     = {UnrealEgo: A New Dataset for Robust Egocentric 3D Human Motion Capture},
    author    = {Akada, Hiroyasu and Wang, Jian and Shimada, Soshi and Takahashi, Masaki and Theobalt, Christian and Golyanik, Vladislav},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year      = {2022}
}
Acknowledgments
We thank Silicon Studio Corp. for providing the fisheye plugin. Hiroyasu Akada and Masaki Takahashi were supported by the Core Research for Evolutional Science and Technology of the Japan Science and Technology Agency (JPMJCR19A1). Jian Wang, Soshi Shimada, Vladislav Golyanik and Christian Theobalt were supported by the ERC Consolidator Grant 4DReply (770784).
Contact
For questions and clarifications, please get in touch with the authors:
Hiroyasu Akada hakada@mpi-inf.mpg.de
Jian Wang jianwang@mpi-inf.mpg.de
Vladislav Golyanik golyanik@mpi-inf.mpg.de