arXiv Analytics

arXiv:2406.08447 [cs.LG]

The Impact of Initialization on LoRA Finetuning Dynamics

Soufiane Hayou, Nikhil Ghosh, Bin Yu

Published 2024-06-12 (Version 1)

In this paper, we study the role of initialization in Low Rank Adaptation (LoRA) as originally introduced in Hu et al. (2021). Essentially, to start finetuning from the pretrained model, one can either initialize B to zero and A to random (the default initialization in the PEFT package), or vice-versa. In both cases, the product BA is zero at initialization, which ensures that finetuning starts from the pretrained model. These two initialization schemes are seemingly similar and should, in principle, yield the same performance and share the same optimal learning rate. We demonstrate that this intuition is incorrect and that the first scheme (initializing B to zero and A to random) on average yields better performance than the other. Our theoretical analysis shows that the reason behind this might be that the first initialization allows the use of larger learning rates (without causing output instability) than the second, resulting in more efficient learning under the first scheme. We validate our results with extensive experiments on LLMs.
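For concreteness, the sketch below (not the authors' code) shows the two initialization schemes for a single LoRA layer in plain PyTorch; the class name LoRALinear, the init_scheme argument, and the rank/scaling values are illustrative assumptions, not taken from the paper or the PEFT package.

```python
# Minimal sketch of the two LoRA initialization schemes discussed in the abstract.
# Assumptions (not from the paper): class/argument names, rank=8, alpha=16.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16.0,
                 init_scheme="init_A"):
        super().__init__()
        # Frozen pretrained weight W; only the adapters A and B are trained.
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)

        self.A = nn.Parameter(torch.empty(rank, in_features))
        self.B = nn.Parameter(torch.empty(out_features, rank))
        self.scaling = alpha / rank

        if init_scheme == "init_A":
            # Scheme 1 (PEFT default): A random, B zero -> BA = 0 at init.
            nn.init.kaiming_uniform_(self.A, a=5 ** 0.5)
            nn.init.zeros_(self.B)
        elif init_scheme == "init_B":
            # Scheme 2: B random, A zero -> BA = 0 at init as well.
            nn.init.zeros_(self.A)
            nn.init.kaiming_uniform_(self.B, a=5 ** 0.5)
        else:
            raise ValueError(f"unknown init_scheme: {init_scheme}")

    def forward(self, x):
        # y = x W^T + scaling * x A^T B^T; equals the pretrained output at init
        # because BA = 0 under both schemes.
        return x @ self.weight.T + self.scaling * (x @ self.A.T) @ self.B.T
```

Under both schemes the adapted layer matches the pretrained layer at initialization; the paper's point is that the finetuning dynamics differ, with the first scheme tolerating larger learning rates.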

Comments: TL;DR: Different initializations lead to completely different finetuning dynamics. One initialization (A random, B zero) is generally better than the natural opposite initialization. arXiv admin note: text overlap with arXiv:2402.12354
Categories: cs.LG, cs.AI, cs.CL, stat.ML