arXiv:2407.04871 [cs.LG]

Improving Knowledge Distillation in Transfer Learning with Layer-wise Learning Rates

Shirley Kokane, Mostofa Rafid Uddin, Min Xu

Published 2024-07-05, Version 1

Transfer learning methods begin to perform poorly as the complexity of the learning task increases. Most of these methods compute the cumulative difference over all matched features and back-propagate that single loss through every layer. In contrast, we propose a novel layer-wise learning scheme that adjusts the learning parameters of each layer as a function of the differences in the Jacobian, attention map, or Hessian of the output activations with respect to the network parameters. We apply this scheme to attention-map-based and derivative-based (first- and second-order) transfer learning methods, and observe improved learning performance and stability across a wide range of datasets. Extensive experimental evaluation shows that the performance gain from our method grows as the learning task becomes more difficult.
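To make the layer-wise idea concrete, the sketch below scales each student layer's learning rate in proportion to its attention-map discrepancy from the matched teacher layer. This is a minimal PyTorch sketch under assumed conventions: the attention-map definition, the proportional scaling rule, and the helper names (attention_map, layer_discrepancies, build_layerwise_optimizer) are illustrative and not taken from the paper.

```python
# Illustrative sketch: per-layer learning rates driven by a teacher-student
# attention-map discrepancy. The discrepancy measure and scaling rule are
# assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Spatial attention map of a feature tensor (batch, C, H, W),
    channel-averaged, flattened, and L2-normalised per sample."""
    amap = feat.pow(2).mean(dim=1).flatten(1)   # (batch, H*W)
    return F.normalize(amap, dim=1)


def layer_discrepancies(student_feats, teacher_feats):
    """Mean squared attention-map distance for each matched layer pair."""
    return [
        (attention_map(s) - attention_map(t)).pow(2).sum(dim=1).mean().item()
        for s, t in zip(student_feats, teacher_feats)
    ]


def build_layerwise_optimizer(student_layers, discrepancies, base_lr=1e-3):
    """One optimizer parameter group per student layer; layers with a larger
    teacher-student discrepancy receive a proportionally larger learning rate
    (assumed scaling rule)."""
    total = sum(discrepancies) + 1e-12
    param_groups = [
        {"params": layer.parameters(),
         "lr": base_lr * len(discrepancies) * d / total}
        for layer, d in zip(student_layers, discrepancies)
    ]
    return torch.optim.SGD(param_groups, lr=base_lr, momentum=0.9)
```

A Jacobian- or Hessian-based variant would replace the attention-map distance with the corresponding first- or second-order derivative discrepancy computed per layer.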

Related articles:
arXiv:2308.09544 [cs.LG] (Published 2023-08-18)
Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning
arXiv:2310.02572 [cs.LG] (Published 2023-10-04)
Improving Knowledge Distillation with Teacher's Explanation
arXiv:2106.06788 [cs.LG] (Published 2021-06-12)
Learngene: From Open-World to Your Learning Task