arXiv Analytics

arXiv:2004.07711 [cs.CV]

Knowledge Distillation for Action Anticipation via Label Smoothing

Guglielmo Camporese, Pasquale Coscia, Antonino Furnari, Giovanni Maria Farinella, Lamberto Ballan

Published 2020-04-16, Version 1

The human capability to anticipate the near future from visual observations and non-verbal cues is essential for developing intelligent systems that interact with people. Several research areas, such as human-robot interaction (HRI), assisted living, and autonomous driving, need to foresee future events, for instance to avoid crashes or to help visually impaired people. Such a challenging task requires capturing and understanding the underlying structure of the analyzed domain in order to reduce prediction uncertainty. Since action anticipation can be seen as a multi-label problem with missing labels, we design and extend the idea of label smoothing by extracting semantics from the target labels. We show that this generalization is equivalent to a knowledge distillation framework in which a teacher injects useful semantic information into the model during training. In our experiments, we implement a multi-modal framework based on long short-term memory (LSTM) networks to anticipate future actions, which is able to summarise past observations while making predictions of the future at different time steps. To validate our soft-labeling procedure, we perform extensive experiments on the egocentric EPIC-Kitchens dataset, which includes more than 2500 action classes. The experiments show that label smoothing systematically improves the performance of state-of-the-art models.
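As background for the abstract above, standard label smoothing replaces a one-hot target with a mixture of the one-hot vector and a uniform distribution over classes; the paper generalizes this by replacing the uniform prior with semantics extracted from the target labels. The following minimal sketch (standard uniform smoothing only, not the paper's semantic variant; function name and parameters are illustrative) shows the basic idea:

```python
def smooth_labels(target_idx, num_classes, eps=0.1):
    """Uniform label smoothing: every class receives eps / num_classes
    probability mass, and the true class additionally keeps 1 - eps.
    The result is a valid probability distribution summing to 1."""
    uniform = eps / num_classes
    probs = [uniform] * num_classes
    probs[target_idx] += 1.0 - eps
    return probs

# Example: class 2 out of 5 classes with eps = 0.1
# gives [0.02, 0.02, 0.92, 0.02, 0.02].
```

In the knowledge-distillation view described in the abstract, the smoothing distribution plays the role of a teacher's soft output injected into the training targets.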

Related articles:
arXiv:1605.08110 [cs.CV] (Published 2016-05-26)
Video Summarization with Long Short-term Memory
arXiv:2009.08233 [cs.CV] (Published 2020-09-17)
Label Smoothing and Adversarial Robustness
arXiv:2401.02052 [cs.CV] (Published 2023-10-02)
Encoder-Decoder Based Long Short-Term Memory (LSTM) Model for Video Captioning