arXiv Analytics


arXiv:2002.03532 [cs.LG]

Understanding and Improving Knowledge Distillation

Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain

Published 2020-02-10 (Version 1)

Knowledge distillation is a model-agnostic technique to improve model quality under a fixed capacity budget. It is commonly used for model compression, where a higher-capacity teacher model with better quality is used to train a more compact student model with better inference efficiency. Through distillation, one hopes to benefit from the student's compactness without sacrificing too much model quality. Despite the great success of knowledge distillation, a deeper understanding of how it benefits the student model's training dynamics remains under-explored. In this paper, we dissect the effects of knowledge distillation into three main factors: (1) benefits inherited from label smoothing, (2) example re-weighting based on the teacher's confidence on the ground truth, and (3) prior knowledge of optimal output (logit) layer geometry. Using extensive systematic analyses and empirical studies on synthetic and real-world datasets, we confirm that these three factors play a major role in knowledge distillation. Furthermore, based on our findings, we propose a simple yet effective technique to improve knowledge distillation empirically.
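For orientation, the sketch below shows the standard knowledge-distillation objective (soft teacher targets plus ground-truth cross-entropy) that the abstract builds on; it is not the improved technique proposed in the paper, and the hyperparameter names `temperature` and `alpha` are illustrative choices.

```python
# Minimal sketch of the generic knowledge-distillation loss (teacher -> student).
# Assumed setup: PyTorch, classification logits, hard integer labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    # Soft targets: the teacher's tempered class probabilities.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between tempered distributions, scaled by T^2 so that
    # gradient magnitudes stay comparable across temperatures.
    kd_term = F.kl_div(log_student, soft_targets,
                       reduction="batchmean") * temperature ** 2
    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    # alpha balances imitation of the teacher against fitting the labels.
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In this formulation, the teacher's confidence on the ground-truth class effectively re-weights each example's contribution, which is one of the three factors the paper analyzes.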

Related articles:
arXiv:2308.09544 [cs.LG] (Published 2023-08-18)
Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning
arXiv:2310.02572 [cs.LG] (Published 2023-10-04)
Improving Knowledge Distillation with Teacher's Explanation
arXiv:2407.04871 [cs.LG] (Published 2024-07-05)
Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates