Event cameras offer various advantages for novel view rendering compared to synchronously operating RGB cameras, and efficient event-based techniques supporting rigid scenes have recently been demonstrated in the literature. In the case of non-rigid objects, however, existing approaches additionally require sparse RGB inputs, which can be a substantial practical limitation; it remains unknown whether similar models can be learned from event streams alone. This paper sheds light on this challenging open question and introduces Ev4DGS, i.e., the first approach for novel view rendering of non-rigidly deforming objects in the explicit observation space (i.e., as RGB or greyscale images) from monocular event streams. Our method regresses a deformable 3D Gaussian Splatting representation through 1) a loss relating the outputs of the estimated model to the 2D event observation space, and 2) a coarse 3D deformation model trained from binary masks generated from events. We perform experimental comparisons on existing synthetic and newly recorded real datasets with non-rigid objects. The results demonstrate the validity of Ev4DGS and its superior performance compared to multiple naïve baselines that can be applied in our setting.
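To make the first loss component concrete, the sketch below shows one common way a rendered-vs-event consistency term can be instantiated: two greyscale frames rendered at the boundaries of an event window are compared, in log-intensity space, against the per-pixel sum of signed event polarities scaled by a contrast threshold. The function names, the (x, y, t, polarity) event layout, and the single-threshold linear event-generation model are assumptions made for illustration, not necessarily the exact formulation used in Ev4DGS.

# Hedged sketch of an event-based supervision loss relating rendered greyscale
# frames to accumulated events. Assumption: a linear event-generation model with
# a single contrast threshold C; not necessarily Ev4DGS's exact loss.
import torch

def accumulate_events(events: torch.Tensor, t0: float, t1: float,
                      height: int, width: int) -> torch.Tensor:
    """Sum signed event polarities per pixel over the window [t0, t1).

    `events` is assumed to be an (N, 4) tensor of (x, y, t, polarity in {-1, +1}).
    """
    mask = (events[:, 2] >= t0) & (events[:, 2] < t1)
    ev = events[mask]
    acc = torch.zeros(height, width, device=events.device)
    # Scatter-add polarities at (y, x) pixel locations.
    acc.index_put_((ev[:, 1].long(), ev[:, 0].long()), ev[:, 3], accumulate=True)
    return acc

def event_loss(render_t0: torch.Tensor, render_t1: torch.Tensor,
               event_map: torch.Tensor, contrast_threshold: float = 0.2,
               eps: float = 1e-5) -> torch.Tensor:
    """Compare the rendered log-brightness change with the accumulated events."""
    pred_change = torch.log(render_t1 + eps) - torch.log(render_t0 + eps)
    target_change = contrast_threshold * event_map
    return torch.nn.functional.l1_loss(pred_change, target_change)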
We divide the event-based reconstruction of non-rigid scenes into two stages. In the first stage, we train a coarse deformation model that represents the non-rigid object shape as a set of points and captures large 3D deformations in the scene. In the second stage, we obtain the 3DGS representation from the event stream. The positions of the 3D Gaussians are expressed in a barycentric coordinate system, allowing them to move in accordance with the coarse deformation model.
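As a rough illustration of the second stage, the snippet below sketches how Gaussian centres can be tied to a coarse point set via barycentric coordinates: weights are solved once in the canonical frame and then reused to warp each centre as a weighted combination of the deformed anchor positions. The choice of four coarse points as anchors per Gaussian, and all function names, are assumptions made for this sketch rather than details taken from the paper.

# Minimal sketch of expressing 3D Gaussian centres in barycentric coordinates of
# coarse deformation points, so the Gaussians follow the coarse model over time.
# Assumption (not from the paper): each Gaussian is anchored to 4 coarse points
# selected at the canonical time, and those anchors are reused at every time step.
import torch

def barycentric_weights(gauss_xyz: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    """Solve for barycentric coordinates of each centre w.r.t. its 4 anchor points.

    gauss_xyz: (G, 3) Gaussian centres in the canonical frame.
    anchors:   (G, 4, 3) canonical positions of the 4 anchor points per Gaussian.
    Returns (G, 4) weights that sum to 1 and reproduce gauss_xyz exactly
    (ill-conditioned if the 4 anchors are nearly coplanar).
    """
    ones = torch.ones(*anchors.shape[:2], 1, device=anchors.device)
    A = torch.cat([anchors, ones], dim=-1).transpose(1, 2)                  # (G, 4, 4)
    b = torch.cat([gauss_xyz, torch.ones_like(gauss_xyz[:, :1])], dim=-1)   # (G, 4)
    return torch.linalg.solve(A, b.unsqueeze(-1)).squeeze(-1)               # (G, 4)

def deform_gaussians(weights: torch.Tensor, anchors_t: torch.Tensor) -> torch.Tensor:
    """Warp Gaussian centres to time t as a barycentric combination of the
    deformed anchor positions anchors_t of shape (G, 4, 3)."""
    return (weights.unsqueeze(-1) * anchors_t).sum(dim=1)                   # (G, 3)

With this parameterisation, only the coarse points need to be advanced per time step; the Gaussian centres follow for free, which is one plausible way to realise the barycentric coupling described above.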
@inproceedings{nakabayashi2025ev4dgs,
  title     = {Ev4DGS: Novel-view Rendering of Non-Rigid Objects from Monocular Event Streams},
  author    = {Takuya Nakabayashi and Navami Kairanda and Hideo Saito and Vladislav Golyanik},
  booktitle = {The 36th British Machine Vision Conference},
  year      = {2025}
}