{ "id": "2404.08080", "version": "v1", "published": "2024-04-11T18:35:49.000Z", "updated": "2024-04-11T18:35:49.000Z", "title": "Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models", "authors": [ "Tanmay Gautam", "Youngsuk Park", "Hao Zhou", "Parameswaran Raman", "Wooseok Ha" ], "comment": "29 pages, 25 tables, 9 figures", "categories": [ "cs.LG", "cs.AI", "cs.CL", "math.OC" ], "abstract": "Fine-tuning language models (LMs) has demonstrated success in a wide array of downstream tasks. However, as LMs are scaled up, the memory requirements for backpropagation become prohibitively high. Zeroth-order (ZO) optimization methods can leverage memory-efficient forward passes to estimate gradients. More recently, MeZO, an adaptation of ZO-SGD, has been shown to consistently outperform zero-shot and in-context learning when combined with suitable task prompts. In this work, we couple ZO methods with variance reduction techniques to enhance stability and convergence for inference-based LM fine-tuning. We introduce Memory-Efficient Zeroth-Order Stochastic Variance-Reduced Gradient (MeZO-SVRG) and demonstrate its efficacy across multiple LM fine-tuning tasks, eliminating the reliance on task-specific prompts. Evaluated across a range of both masked and autoregressive LMs on benchmark GLUE tasks, MeZO-SVRG outperforms MeZO with up to 20% increase in test accuracies in both full- and partial-parameter fine-tuning settings. MeZO-SVRG benefits from reduced computation time as it often surpasses MeZO's peak test accuracy with a $2\\times$ reduction in GPU-hours. MeZO-SVRG significantly reduces the required memory footprint compared to first-order SGD, i.e. by $2\\times$ for autoregressive models. Our experiments highlight that MeZO-SVRG's memory savings progressively improve compared to SGD with larger batch sizes.", "revisions": [ { "version": "v1", "updated": "2024-04-11T18:35:49.000Z" } ], "analyses": { "keywords": [ "fine-tuning language models", "variance-reduced zeroth-order methods", "zeroth-order stochastic variance-reduced gradient", "surpasses mezos peak test accuracy" ], "note": { "typesetting": "TeX", "pages": 29, "language": "en", "license": "arXiv", "status": "editable" } } }