arXiv Analytics

arXiv:2407.18158 [stat.ML]

Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models

Sanae Lotfi, Yilun Kuang, Brandon Amos, Micah Goldblum, Marc Finzi, Andrew Gordon Wilson

Published 2024-07-25 (Version 1)

Large language models (LLMs) with billions of parameters excel at predicting the next token in a sequence. Recent work computes non-vacuous compression-based generalization bounds for LLMs, but these bounds are vacuous for large models at the billion-parameter scale. Moreover, these bounds are obtained through restrictive compression techniques, bounding compressed models that generate low-quality text. Additionally, the tightness of these existing bounds depends on the number of IID documents in a training set rather than the much larger number of non-IID constituent tokens, leaving untapped potential for tighter bounds. In this work, we instead use properties of martingales to derive generalization bounds that benefit from the vast number of tokens in LLM training sets. Since a dataset contains far more tokens than documents, our generalization bounds not only tolerate but actually benefit from far less restrictive compression schemes. With Monarch matrices, Kronecker factorizations, and post-training quantization, we achieve non-vacuous generalization bounds for LLMs as large as LLaMA2-70B. Unlike previous approaches, our work achieves the first non-vacuous bounds for models that are deployed in practice and generate high-quality text.
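The abstract's central claim is that treating the non-IID token sequence with martingale tools lets the confidence term shrink with the number of tokens rather than the number of documents. As a hedged illustration only (not the paper's actual theorem), a standard Azuma–Hoeffding bound for bounded martingale differences shows the kind of scaling in the token count $n$ that the abstract alludes to; the symbols $Z_i$, $c$, and $\delta$ below are illustrative placeholders.

```latex
% Illustrative sketch only: a generic Azuma--Hoeffding bound, not the
% paper's theorem. Z_i, c, n, and delta are assumed placeholder symbols.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $Z_1,\dots,Z_n$ be a martingale difference sequence with respect to a
filtration $\mathcal{F}_0\subseteq\cdots\subseteq\mathcal{F}_n$, so that
$\mathbb{E}[Z_i \mid \mathcal{F}_{i-1}]=0$ and $|Z_i|\le c$ almost surely.
Then for any $\delta\in(0,1)$, with probability at least $1-\delta$,
\[
  \frac{1}{n}\sum_{i=1}^{n} Z_i \;\le\; c\,\sqrt{\frac{2\ln(1/\delta)}{n}}.
\]
If $Z_i$ is taken to be the centered per-token loss of token $i$ given its
prefix, the deviation term decays with the token count $n$ rather than the
much smaller document count, which is the scaling the abstract describes.
\end{document}
```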
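The compression schemes named in the abstract (Monarch matrices, Kronecker factorizations, post-training quantization) all replace dense weight matrices with cheaper structured ones. Below is a minimal PyTorch sketch of a Kronecker-factorized linear layer for intuition; the class name `KroneckerLinear`, the factor shapes, and the initialization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class KroneckerLinear(nn.Module):
    """Hypothetical sketch: a linear map whose weight is the Kronecker
    product A (x) B, storing m1*n1 + m2*n2 parameters instead of
    (m1*m2) * (n1*n2). Illustrative only, not the paper's code."""

    def __init__(self, m1: int, n1: int, m2: int, n2: int):
        super().__init__()
        self.A = nn.Parameter(torch.randn(m1, n1) / n1 ** 0.5)
        self.B = nn.Parameter(torch.randn(m2, n2) / n2 ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # For W = A (x) B and a row-major reshape X of x with shape
        # (n1, n2), we have W x = vec(A X B^T), so the full Kronecker
        # product is never materialized.
        m1, n1 = self.A.shape
        m2, n2 = self.B.shape
        batch = x.shape[:-1]
        X = x.reshape(*batch, n1, n2)
        Y = self.A @ X @ self.B.transpose(-1, -2)  # (..., m1, m2)
        return Y.reshape(*batch, m1 * m2)


if __name__ == "__main__":
    # Acts like a 768 -> 768 dense layer with ~1.7k parameters instead of ~590k.
    layer = KroneckerLinear(m1=16, n1=32, m2=48, n2=24)
    x = torch.randn(4, 32 * 24)
    print(layer(x).shape)  # torch.Size([4, 768])
```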

Related articles:
arXiv:2404.19557 [stat.ML] (Published 2024-04-30)
Neural Dynamic Data Valuation
arXiv:1808.08765 [stat.ML] (Published 2018-08-27)
Identifiability of Low-Rank Sparse Component Analysis
arXiv:1905.05125 [stat.ML] (Published 2019-05-13)
Exact high-dimensional asymptotics for support vector machine