arXiv:2410.09973 [stat.ML]
Gradient Span Algorithms Make Predictable Progress in High Dimension
Published 2024-10-13 (Version 1)
We prove that all 'gradient span algorithms' have asymptotically deterministic behavior on scaled Gaussian random functions as the dimension tends to infinity. In particular, this result explains the counterintuitive phenomenon that different training runs of many large machine learning models produce approximately equal cost curves despite random initialization on a complicated non-convex landscape. The distributional assumption of (non-stationary) isotropic Gaussian random functions is general enough to serve as a realistic model for machine learning training while also encompassing spin glasses and random quadratic functions.
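The abstract names random quadratic functions as one instance of the Gaussian model, which makes the phenomenon easy to reproduce numerically. The following NumPy sketch (an illustration under simplified assumptions, not code from the paper) runs gradient descent, the simplest gradient span algorithm, on independently drawn random quadratics in dimension d = 2000; the per-dimension cost curves of the five runs nearly coincide even though both the landscape and the initialization are redrawn for each run.

```python
import numpy as np

def sample_random_quadratic(d, rng):
    """Draw a random quadratic F(x) = 0.5 x'Hx - b'x with a Wishart-type
    Hessian (an illustrative stand-in, not the paper's general model);
    its spectrum follows the Marchenko-Pastur law, so the bulk geometry
    of the landscape is essentially the same for every draw."""
    W = rng.standard_normal((d, d))
    H = W.T @ W / d
    b = rng.standard_normal(d)
    cost = lambda x: (0.5 * x @ H @ x - b @ x) / d  # cost per dimension
    grad = lambda x: H @ x - b                      # gradient of F
    return cost, grad

def gradient_descent(cost, grad, x0, lr=0.1, steps=60):
    """Plain gradient descent, the simplest gradient span algorithm:
    iterate t lies in x0 + span{grad(x_0), ..., grad(x_{t-1})}."""
    x, curve = x0, []
    for _ in range(steps):
        curve.append(cost(x))
        x = x - lr * grad(x)
    return np.array(curve)

d = 2000
curves = []
for seed in range(5):                    # fresh landscape and init per run
    rng = np.random.default_rng(seed)
    cost, grad = sample_random_quadratic(d, rng)
    x0 = rng.standard_normal(d)          # random initialization
    curves.append(gradient_descent(cost, grad, x0))
curves = np.stack(curves)

# Run-to-run spread of the cost curves is tiny relative to their scale,
# and it shrinks further as d grows.
print("max std across runs:", curves.std(axis=0).max())
print("typical cost scale :", np.abs(curves).mean())
```

Rerunning with a small dimension (say d = 20) makes the five curves visibly diverge, while larger d tightens them, consistent with the dimension-to-infinity limit analyzed in the paper; for quadratics the spread shrinks roughly like 1/sqrt(d).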