arXiv:1603.09200 [cs.CV]

Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos

Alejandro Betancourt, Natalia Díaz-Rodríguez, Emilia Barakova, Lucio Marcenaro, Matthias Rauterberg, Carlo Regazzoni

Published 2016-03-30 (Version 1)

Wearable cameras stand out as one of the most promising devices for the coming years, and as a consequence, the demand for computer algorithms that automatically understand their videos is growing quickly. Automatic understanding of these videos is not an easy task: the mobile nature of the device implies important challenges, such as changing light conditions and unrestricted recording locations. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information about the light conditions and the location being recorded. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed unsupervised strategy is used as a switching mechanism to improve hand detection in egocentric videos under a multi-model approach.
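The abstract outlines a three-stage pipeline: compute a global feature per frame, embed the features with a non-linear manifold method, and use the discovered contexts to switch between per-context models. The sketch below illustrates that general shape only; per-channel RGB histograms as the global feature, Isomap as the manifold method, k-means for the context labels, the synthetic frames, and the named detectors are all illustrative assumptions, not the paper's confirmed components.

```python
# Sketch of the unsupervised context pipeline from the abstract:
# global per-frame features -> non-linear manifold embedding ->
# clustering into context labels -> switching between detection models.
# Feature, manifold method, and clusterer are illustrative choices.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def global_feature(frame, bins=8):
    """Concatenated per-channel intensity histogram (a simple global feature)."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
             for c in range(frame.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

# Synthetic stand-in for egocentric frames: two "contexts" with
# different illumination statistics (dim vs. bright).
rng = np.random.default_rng(0)
frames = np.concatenate([
    rng.integers(0, 120, size=(50, 32, 32, 3)),    # dim, indoor-like
    rng.integers(100, 255, size=(50, 32, 32, 3)),  # bright, outdoor-like
])

X = np.stack([global_feature(f) for f in frames])

# Non-linear manifold embedding of the global features.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# Unsupervised context labels; each label indexes one detection model.
contexts = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)

detectors = {0: "hand_model_A", 1: "hand_model_B"}  # hypothetical per-context detectors
for t in (0, 75):
    print(f"frame {t}: context {contexts[t]} -> {detectors[contexts[t]]}")
```

In the multi-model application the paper mentions, each discovered context label would select a hand-detection model trained for that illumination/location regime, so the switching itself needs no labeled context data.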

Comments: Submitted for publication
Categories: cs.CV
Related articles:
arXiv:1707.05564 [cs.CV] (Published 2017-07-18)
Batch based Monocular SLAM for Egocentric Videos
arXiv:1812.09570 [cs.CV] (Published 2018-12-22)
EgoReID: Person re-identification in Egocentric Videos Acquired by Mobile Devices with First-Person Point-of-View
arXiv:1904.05250 [cs.CV] (Published 2019-04-10)
Next-Active-Object prediction from Egocentric Videos