arXiv:1603.09200 [cs.CV]
Unsupervised Understanding of Location and Illumination Changes in Egocentric Videos
Alejandro Betancourt, Natalia Díaz-Rodríguez, Emilia Barakova, Lucio Marcenaro, Matthias Rauterberg, Carlo Regazzoni
Published: 2016-03-30 (Version 1)
Wearable cameras stand out as one of the most promising devices of the coming years, and as a consequence the demand for computer vision algorithms that automatically understand the recorded videos is growing quickly. Understanding these videos is not an easy task: their mobile nature poses important challenges, such as changing light conditions and the unrestricted locations being recorded. This paper proposes an unsupervised strategy based on global features and manifold learning to endow wearable cameras with contextual information about the light conditions and the location being recorded. Results show that non-linear manifold methods can capture contextual patterns from global features without requiring large computational resources. As an application case, the proposed unsupervised strategy is used as a switching mechanism to improve hand detection in egocentric videos under a multi-model approach.
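The abstract does not specify which global features or manifold method the authors use. As an illustration only, the following sketch applies one well-known nonlinear manifold method (Laplacian eigenmaps on a Gaussian-affinity graph) to a coarse global intensity histogram, and shows how two illumination contexts (synthetic "dark" and "bright" frames) separate in the embedding without labels. The descriptor, the method choice, and all parameters here are assumptions, not the paper's implementation.

```python
import numpy as np

def global_feature(frame):
    # Coarse global descriptor: a normalized 8-bin intensity histogram.
    # (A stand-in for the paper's global features, which may differ.)
    hist, _ = np.histogram(frame, bins=8, range=(0, 256))
    return hist / hist.sum()

def laplacian_embedding(X, sigma, dim=2):
    # Laplacian eigenmaps: one nonlinear manifold method, chosen here
    # purely for illustration.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.exp(-(D ** 2) / (2 * sigma ** 2))   # Gaussian affinity graph
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    # Drop the trivial constant eigenvector; keep the next `dim` ones.
    return d_inv_sqrt[:, None] * vecs[:, 1:dim + 1]

# Synthetic stand-ins for egocentric frames: 20 dark (indoor-like) and
# 20 bright (outdoor-like) 16x16 grayscale images.
rng = np.random.default_rng(0)
dark = rng.integers(0, 80, size=(20, 16, 16))
bright = rng.integers(150, 256, size=(20, 16, 16))
X = np.array([global_feature(f) for f in np.concatenate([dark, bright])])

emb = laplacian_embedding(X, sigma=0.2)
# A simple threshold on the first embedding coordinate recovers the two
# illumination contexts without any labels; in a multi-model pipeline,
# this kind of context estimate could switch between specialized models.
pred = emb[:, 0] > np.median(emb[:, 0])
```

Note that the whole pipeline is a few matrix operations on low-dimensional global descriptors, which is consistent with the abstract's claim that such methods do not require large computational resources.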