arXiv Analytics


arXiv:2409.16223 [cs.LG]

Fine-Tuning is Fine, if Calibrated

Zheda Mai, Arpita Chowdhury, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Vardaan Pahuja, Tanya Berger-Wolf, Song Gao, Charles Stewart, Yu Su, Wei-Lun Chao

Published 2024-09-24 (Version 1)

Fine-tuning is arguably the most straightforward way to tailor a pre-trained model (e.g., a foundation model) to downstream applications, but it also comes with the risk of losing valuable knowledge learned in pre-training. For example, fine-tuning a pre-trained classifier that recognizes a large number of classes to master a subset of classes at hand has been shown to drastically degrade the model's accuracy on the other classes it had previously learned. As a result, the fine-tuned model is hard to use further when it encounters classes beyond the fine-tuning data. In this paper, we systematically dissect this issue, aiming to answer the fundamental question, "What has been damaged in the fine-tuned model?" To our surprise, we find that the fine-tuned model neither forgets the relationships among the other classes nor degrades the features needed to recognize them. Instead, the fine-tuned model often produces more discriminative features for these other classes, even though they were missing during fine-tuning! What really hurts accuracy is the discrepant logit scales between the fine-tuning classes and the other classes, implying that a simple post-processing calibration can bring back the pre-trained model's capability and at the same time reveal the feature improvement over all classes. We conduct an extensive empirical study to demonstrate the robustness of these findings and provide preliminary explanations for them, suggesting new directions for future theoretical analysis. Our code is available at https://github.com/OSU-MLB/Fine-Tuning-Is-Fine-If-Calibrated.
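To make the calibration idea concrete: since fine-tuning inflates the logits of the fine-tuning classes relative to the absent classes, a post-hoc fix can simply add a constant offset to the absent classes' logits before taking the argmax. The sketch below is my own illustration of that idea, not the authors' released code; the function name, the `gamma` hyperparameter (which would be tuned on held-out data), and the NumPy interface are all assumptions.

```python
import numpy as np

def calibrate_logits(logits, finetune_class_ids, gamma):
    """Offset the logits of classes absent from fine-tuning by a
    constant gamma, compensating for the discrepant logit scales
    between fine-tuning classes and the other classes."""
    calibrated = np.asarray(logits, dtype=float).copy()
    # Boolean mask selecting the classes NOT seen during fine-tuning.
    absent = np.ones(calibrated.shape[-1], dtype=bool)
    absent[list(finetune_class_ids)] = False
    calibrated[..., absent] += gamma
    return calibrated

# Toy example: 5 classes, classes 0 and 1 were the fine-tuning classes.
logits = np.array([2.0, 1.5, 0.3, 1.8, 0.1])
out = calibrate_logits(logits, [0, 1], gamma=1.0)
# With gamma=0 the prediction is class 0; with gamma=1.0 the
# suppressed class 3 overtakes it.
```

With `gamma=0` this is a no-op, so sweeping `gamma` on validation data trades off accuracy between the fine-tuning classes and the rest.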

Comments: The first three authors contributed equally
Categories: cs.LG, cs.AI, cs.CV
Related articles:
arXiv:2211.04878 [cs.LG] (Published 2022-11-09)
Foundation Models for Semantic Novelty in Reinforcement Learning
arXiv:2305.18425 [cs.LG] (Published 2023-05-28)
Efficient Storage of Fine-Tuned Models via Low-Rank Approximation of Weight Residuals
arXiv:2501.15955 [cs.LG] (Published 2025-01-27)
Rethinking the Bias of Foundation Model under Long-tailed Distribution