arXiv:2403.17346 [cs.CV]
TRAM: Global Trajectory and Motion of 3D Humans from in-the-wild Videos
Yufu Wang, Ziyun Wang, Lingjie Liu, Kostas Daniilidis
Published: 2024-03-26 (Version 1)
We propose TRAM, a two-stage method to reconstruct a human's global trajectory and motion from in-the-wild videos. TRAM robustifies SLAM to recover the camera motion in the presence of dynamic humans and uses the scene background to derive the motion scale. Using the recovered camera as a metric-scale reference frame, we introduce a video transformer model (VIMO) to regress the kinematic body motion of a human. By composing the two motions, we achieve accurate recovery of 3D humans in world space, reducing global motion errors by 60% relative to prior work.
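The final composition step amounts to a per-frame chain of rigid transforms: the body root pose, regressed in the camera frame, is mapped into world coordinates through the metric-scale camera poses recovered by the SLAM stage. Below is a minimal sketch of that step, not TRAM's actual code; the interface and all names (compose_global, R_wc, t_bc) are hypothetical, and it assumes world-from-camera rotations/translations and a per-frame body root orientation/translation in the camera frame.

import numpy as np

def compose_global(R_wc, t_wc, R_bc, t_bc):
    # Chain rigid transforms per frame: world <- camera <- body root.
    # R_wc (T,3,3), t_wc (T,3): world-from-camera poses at metric scale.
    # R_bc (T,3,3), t_bc (T,3): body root pose in the camera frame.
    R_bw = R_wc @ R_bc                                  # batched rotation composition
    t_bw = np.einsum('tij,tj->ti', R_wc, t_bc) + t_wc   # rotate, then translate
    return R_bw, t_bw

# Toy usage: a camera translating along +x while the body stays fixed in camera view,
# so the recovered global trajectory inherits the camera's motion.
T = 5
R_wc = np.broadcast_to(np.eye(3), (T, 3, 3))
t_wc = np.stack([np.linspace(0, 2, T), np.zeros(T), np.zeros(T)], axis=1)
R_bc = np.broadcast_to(np.eye(3), (T, 3, 3))
t_bc = np.tile([0.0, 0.0, 3.0], (T, 1))
R_bw, t_bw = compose_global(R_wc, t_wc, R_bc, t_bc)    # global root trajectory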
Comments: The project website: https://yufu-wang.github.io/tram4d/
Categories: cs.CV
Keywords: 3D humans, in-the-wild videos, SLAM, metric-scale reference frame, global trajectory
Tags: GitHub project
Related articles:
arXiv:2003.02050 [cs.CV] (Published 2020-03-04)
Learning to Transfer Texture from Clothing Images to 3D Humans
arXiv:2002.05447 [cs.CV] (Published 2020-02-13)
Emotion Recognition for In-the-wild Videos
arXiv:2306.11541 [cs.CV] (Published 2023-06-20)
Audio-Driven 3D Facial Animation from In-the-Wild Videos