arXiv:2006.03810 [cs.CV]

An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation

Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas

Published 2020-06-06 (Version 1)

The generalization performance of deep learning models trained with Empirical Risk Minimization can be improved significantly by data augmentation strategies such as simple transformations or mixed-sample augmentation. In this work, we empirically analyse the impact of such augmentation strategies on the transfer of generalization between teacher and student models in a distillation setup. We observe that if a teacher is trained using any of the mixed-sample augmentation strategies, the student model distilled from it is impaired in its generalization capabilities. We hypothesize that such strategies limit a model's capability to learn example-specific features, leading to a loss in quality of the supervision signal during distillation, without impacting its standalone prediction performance. We present a novel KL-divergence based metric to quantitatively measure the generalization capacity of the different networks.
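The abstract does not spell out the proposed metric, but it is based on the KL divergence between teacher and student predictive distributions, the same quantity that drives the soft-label term in standard knowledge distillation. The sketch below shows one plausible way to compute such a batch-averaged KL divergence in PyTorch; the temperature value and the averaging scheme are assumptions, not the paper's exact formulation.

```python
# Minimal sketch: KL divergence between teacher and student predictive
# distributions, as used in distillation-style comparisons.
# NOTE: the temperature T and batch-mean reduction are assumptions;
# the paper's exact metric is not specified in the abstract.
import torch
import torch.nn.functional as F


def kl_teacher_student(teacher_logits: torch.Tensor,
                       student_logits: torch.Tensor,
                       T: float = 4.0) -> torch.Tensor:
    """Mean KL(teacher || student) over a batch, with temperature-softened logits."""
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target;
    # 'batchmean' averages the divergence over examples.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (T ** 2)


if __name__ == "__main__":
    # Example usage with random logits for a 10-class problem.
    t_logits = torch.randn(32, 10)
    s_logits = torch.randn(32, 10)
    print(kl_teacher_student(t_logits, s_logits).item())
```

In a distillation setup, a smaller value of this divergence on held-out data would indicate that the student's predictive distribution tracks the teacher's more closely; how the paper aggregates or normalizes this quantity into its generalization metric is not stated in the abstract.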

Related articles:
arXiv:2207.10425 [cs.CV] (Published 2022-07-21)
KD-MVS: Knowledge Distillation Based Self-supervised Learning for MVS
arXiv:1909.10754 [cs.CV] (Published 2019-09-24)
FEED: Feature-level Ensemble for Knowledge Distillation
arXiv:2106.05237 [cs.CV] (Published 2021-06-09)
Knowledge distillation: A good teacher is patient and consistent