arXiv:1910.03915 [cs.CV]

Learning to Generalize One Sample at a Time with Self-Supervision

Antonio D'Innocente, Silvia Bucci, Tatiana Tommasi, Barbara Caputo

Published 2019-10-09 (Version 1)

Although deep networks have significantly increased the performance of visual recognition methods, achieving the robustness across visual domains that real-world applications require remains challenging. To tackle this issue, research on domain adaptation and generalization has flourished over the last decade. An important aspect to consider when assessing the existing literature is the amount of data annotation each approach requires for training, at both the source and target level. In this paper we argue that the annotation overhead should be minimal, since annotation is costly. Hence, we propose to use self-supervised learning to achieve domain generalization and adaptation. We treat learning regularities from non-annotated data as an auxiliary task and cast the problem within a principled auxiliary learning framework. Moreover, we propose to further exploit the ability to learn about visual domains from non-annotated images by learning from target data at test time, as samples are presented to the algorithm one at a time. Results on three different scenarios confirm the value of our approach.
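The recipe sketched in the abstract, a shared backbone trained jointly on the supervised objective and a self-supervised auxiliary objective, then refined on each unlabeled target sample at test time, can be illustrated with a minimal PyTorch sketch. This is an illustration under assumptions, not the authors' released code: rotation prediction stands in for the self-supervised task, and the MultiTaskNet, rotate_batch, training_step, and predict_one_sample names, along with the aux_weight, lr, and steps values, are hypothetical choices made only for this example.

    # Minimal sketch (assumed, not the paper's code) of joint supervised +
    # self-supervised training and one-sample test-time adaptation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    class MultiTaskNet(nn.Module):
        def __init__(self, num_classes, num_aux=4):
            super().__init__()
            backbone = models.resnet18(weights=None)  # any feature extractor works
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()
            self.backbone = backbone
            self.cls_head = nn.Linear(feat_dim, num_classes)  # main recognition task
            self.aux_head = nn.Linear(feat_dim, num_aux)      # self-supervised task

        def forward(self, x):
            f = self.backbone(x)
            return self.cls_head(f), self.aux_head(f)

    def rotate_batch(x):
        # Build the self-supervised task: rotate each image by 0/90/180/270 degrees
        # and ask the network to predict the applied rotation.
        rots = torch.randint(0, 4, (x.size(0),), device=x.device)
        x_rot = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                             for img, k in zip(x, rots)])
        return x_rot, rots

    def training_step(model, optimizer, x, y, aux_weight=0.7):
        # Joint source training: supervised loss plus weighted auxiliary loss.
        x_rot, rot_labels = rotate_batch(x)
        logits, _ = model(x)
        _, rot_logits = model(x_rot)
        loss = F.cross_entropy(logits, y) \
            + aux_weight * F.cross_entropy(rot_logits, rot_labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def predict_one_sample(model, x, lr=1e-4, steps=1):
        # One-sample test-time adaptation: update the network with the
        # self-supervised loss only (no target label is used), then predict.
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(steps):
            x_rot, rot_labels = rotate_batch(x)
            _, rot_logits = model(x_rot)
            F.cross_entropy(rot_logits, rot_labels).backward()
            optimizer.step()
            optimizer.zero_grad()
        model.eval()
        with torch.no_grad():
            logits, _ = model(x)
        return logits.argmax(dim=1)

The design point mirrored here is that the test-time update is driven only by the self-supervised loss, so adapting to each incoming target sample never requires a target label.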

Related articles:
arXiv:2207.11469 [cs.CV] (Published 2022-07-23, updated 2023-04-28)
Progressive Scene Text Erasing with Self-Supervision
arXiv:2108.09208 [cs.CV] (Published 2021-08-20)
Exploring Data Aggregation and Transformations to Generalize across Visual Domains
arXiv:1906.05186 [cs.CV] (Published 2019-06-12)
Boosting Few-Shot Visual Learning with Self-Supervision