
Abstract

While head-mounted devices are becoming more compact, they provide egocentric views with significant self-occlusions of the device user. As a result, existing methods often fail to accurately estimate complex 3D poses from egocentric views. In this work, we propose a new transformer-based framework for egocentric stereo 3D human pose estimation that leverages scene information and the temporal context of egocentric stereo videos. Specifically, we utilize 1) depth features from our 3D scene reconstruction module, computed over uniformly sampled windows of egocentric stereo frames, and 2) human joint queries enhanced by temporal features of the video inputs. Our method accurately estimates human poses even in challenging scenarios such as crouching and sitting. Furthermore, we introduce two new benchmark datasets, UnrealEgo2 and UnrealEgo-RW (RealWorld). The proposed datasets offer a much larger number of egocentric stereo views with a wider variety of human motions than existing datasets, enabling comprehensive evaluation of existing and upcoming methods. Our extensive experiments show that the proposed approach significantly outperforms previous methods. We will release UnrealEgo2, UnrealEgo-RW, and trained models on our project page.
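To illustrate the uniformly sampled windows mentioned above, the sketch below shows one plausible way to draw uniformly spaced windows of frame indices from a video clip. The function name, window size, and the non-overlapping layout are illustrative assumptions, not values taken from the paper.

# A minimal sketch of uniform window sampling over an egocentric stereo video.
# `window_size` and the non-overlapping layout are assumptions for illustration.
from typing import List

def sample_uniform_windows(num_frames: int, window_size: int = 5) -> List[List[int]]:
    """Return uniformly spaced, non-overlapping windows of frame indices."""
    return [
        list(range(start, start + window_size))
        for start in range(0, num_frames - window_size + 1, window_size)
    ]

# Example: a 20-frame clip yields [0..4], [5..9], [10..14], [15..19].
print(sample_uniform_windows(20, 5))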


Method


Figure 3: Our method takes egocentric stereo videos as input. We first apply the 2D module to obtain 2D joint heatmaps and video features (Sec 4.1). The heatmaps are combined with the input videos to create human body masks (Sec 4.2). Next, we use uniformly sampled windows of input frames and human body masks to reconstruct a 3D scene mesh (Sec 4.3). From the mesh, we generate depth maps and depth region masks; note that this diagram shows an example case of missing depth values for the second input frame. Lastly, the depth data, 2D joint heatmaps, video features, joint queries, and padding masks are processed by the transformer-based 3D module to estimate 3D poses (Sec 4.4).
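To make the data flow concrete, here is a minimal PyTorch-style sketch of the transformer-based 3D module: learnable joint queries are enhanced with pooled temporal video features and attend to per-frame features (heatmaps and depth data, assumed fused into one vector per frame), while a padding mask flags frames with missing depth values. All module names, dimensions, and the fusion scheme are assumptions for illustration; the released code may differ.

# Hypothetical sketch of the transformer-based 3D module (Sec 4.4).
# Dimensions and the fusion of heatmaps/depth into `frame_feats` are assumed.
import torch
import torch.nn as nn

class Pose3DModule(nn.Module):
    def __init__(self, num_joints=16, feat_dim=512, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        # Learnable per-joint queries, later enhanced with temporal video features.
        self.joint_queries = nn.Parameter(torch.randn(num_joints, d_model))
        # Project fused per-frame features (heatmaps + depth data) to the model width.
        self.feat_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, 3)  # regress (x, y, z) per joint

    def forward(self, frame_feats, temporal_feats, pad_mask):
        # frame_feats:    (B, T, feat_dim) fused features for T input frames
        # temporal_feats: (B, d_model)     pooled video features from the 2D module
        # pad_mask:       (B, T) bool,     True where depth values are missing
        memory = self.feat_proj(frame_feats)                                     # (B, T, d_model)
        queries = self.joint_queries.unsqueeze(0) + temporal_feats.unsqueeze(1)  # (B, J, d_model)
        decoded = self.decoder(queries, memory, memory_key_padding_mask=pad_mask)
        return self.head(decoded)                                                # (B, J, 3)

model = Pose3DModule()
feats = torch.randn(2, 8, 512)              # 2 clips, 8 frames each
temporal = torch.randn(2, 256)
pad = torch.zeros(2, 8, dtype=torch.bool)
pad[0, 1] = True                            # e.g., missing depth in the second frame
poses = model(feats, temporal, pad)         # -> (2, 16, 3)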


UnrealEgo2 Dataset


UnrealEgo-RW Dataset

Downloads


Citation

BibTeX


@inproceedings{hakada2024unrealego2,
	title = {3D Human Pose Perception from Egocentric Stereo Videos},
	author = {Akada, Hiroyasu and Wang, Jian and Golyanik, Vladislav and Theobalt, Christian},
	booktitle = {Computer Vision and Pattern Recognition (CVPR)},
	year = {2024}
}

Acknowledgments

Hiroyasu Akada, Jian Wang, Vladislav Golyanik and Christian Theobalt were supported by the ERC Consolidator Grant 4DReply (770784).

Contact

For questions and clarifications, please get in touch with the first author: Hiroyasu Akada, hakada@mpi-inf.mpg.de
