arXiv:2106.02713 [stat.ML]

Learning Curves for SGD on Structured Features

Blake Bordelon, Cengiz Pehlevan

Published 2021-06-04 (Version 1)

The generalization performance of a machine learning algorithm such as a neural network depends in a non-trivial way on the structure of the data distribution. Models of generalization in machine learning theory often ignore the low-dimensional structure of natural signals, either by considering data-agnostic bounds or by studying the performance of the algorithm when trained on uncorrelated features. To analyze the influence of data structure on test loss dynamics, we study an exactly solvable model of stochastic gradient descent (SGD) which predicts test loss when training on features with arbitrary covariance structure. We solve the theory exactly for both Gaussian and arbitrary features, and we show that the simpler Gaussian model accurately predicts the test loss of nonlinear random-feature models and deep neural networks trained with SGD on real datasets such as MNIST and CIFAR-10. We show that modeling the geometry of the data in the induced feature space is crucial for accurately predicting the test error throughout learning.
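The kind of setup the abstract describes can be illustrated with a minimal sketch (not the authors' code): online SGD on a linear model whose Gaussian features have a structured covariance, with the test loss tracked over training steps. The feature dimension, power-law spectrum exponent, learning rate, and all variable names below are assumptions made purely for illustration.

```python
# Illustrative sketch only: online SGD on Gaussian features with an assumed
# power-law covariance spectrum, tracking the empirical test loss over steps.
import numpy as np

rng = np.random.default_rng(0)

d = 200                                    # feature dimension (assumed)
eigs = np.arange(1, d + 1) ** -1.5         # assumed power-law covariance spectrum
w_star = rng.normal(size=d) / np.sqrt(d)   # assumed ground-truth weights

def sample_features(n):
    # Gaussian features with diagonal covariance diag(eigs)
    return rng.normal(size=(n, d)) * np.sqrt(eigs)

# Held-out set used to estimate the test loss
X_test = sample_features(5000)
y_test = X_test @ w_star

w = np.zeros(d)
lr = 0.05                                  # assumed constant learning rate
test_losses = []
for t in range(20000):
    x = sample_features(1)[0]              # one fresh sample per SGD step
    y = x @ w_star
    w -= lr * (w @ x - y) * x              # plain SGD on the squared error
    if t % 500 == 0:
        test_losses.append(np.mean((X_test @ w - y_test) ** 2))

print(test_losses[0], "->", test_losses[-1])
```

Under such a setup, the decay of the recorded test loss depends on the assumed covariance spectrum, which is the kind of dependence on data structure the paper's theory is built to capture.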

Related articles:
arXiv:2106.13682 [stat.ML] (Published 2021-06-25)
Prediction of Hereditary Cancers Using Neural Networks
arXiv:2305.02304 [stat.ML] (Published 2023-05-03)
New Equivalences Between Interpolation and SVMs: Kernels and Structured Features
arXiv:2501.04272 [stat.ML] (Published 2025-01-08)
On weight and variance uncertainty in neural networks for regression tasks