arXiv Analytics

arXiv:2210.07612 [stat.ML]

Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes

Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney

Published 2022-10-14, Version 1

The quality of many modern machine learning models improves as model complexity increases, an effect that has been quantified, for predictive performance, by the non-monotonic double descent learning curve. Here, we address the overarching question: is there an analogous theory of double descent for models which estimate uncertainty? We provide a partially affirmative and partially negative answer in the setting of Gaussian processes (GPs). Under standard assumptions, we prove that higher model quality for optimally-tuned GPs (including uncertainty prediction) under marginal likelihood is realized for larger input dimensions, and therefore exhibits a monotone error curve. After showing that marginal likelihood does not naturally exhibit double descent in the input dimension, we highlight related forms of posterior predictive loss that do exhibit non-monotonicity. Finally, we verify empirically that our results hold for real data, beyond the assumptions we consider, and we explore consequences involving synthetic covariates.
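As a rough illustration of the quantities discussed in the abstract (not the paper's experimental setup), the sketch below computes a GP's log marginal likelihood and a test-set negative log predictive density under an RBF kernel while the input dimension grows. The kernel choice, noise level, sample sizes, and data-generating process are all illustrative assumptions.

```python
# Minimal sketch: GP log marginal likelihood vs. posterior predictive loss
# as the input dimension d increases, on synthetic data with an RBF kernel.
# All modeling choices here are illustrative assumptions, not the paper's setup.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * sq / lengthscale**2)

def gp_scores(X_train, y_train, X_test, y_test, noise=0.1, lengthscale=1.0):
    """Return (log marginal likelihood, mean test negative log predictive density)."""
    n = len(X_train)
    K = rbf_kernel(X_train, X_train, lengthscale) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # K^{-1} y
    # Log marginal likelihood: -0.5 y^T K^{-1} y - 0.5 log|K| - (n/2) log(2 pi)
    lml = (-0.5 * y_train @ alpha
           - np.sum(np.log(np.diag(L)))
           - 0.5 * n * np.log(2 * np.pi))
    # Posterior predictive mean and variance at the test inputs
    Ks = rbf_kernel(X_train, X_test, lengthscale)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v**2, axis=0) + noise**2
    nlpd = np.mean(0.5 * np.log(2 * np.pi * var) + 0.5 * (y_test - mu)**2 / var)
    return lml, nlpd

rng = np.random.default_rng(0)
n_train, n_test = 100, 200
for d in [1, 5, 20, 100]:  # increasing input dimension
    w = rng.normal(size=d) / np.sqrt(d)
    X = rng.normal(size=(n_train + n_test, d)) / np.sqrt(d)
    y = X @ w + 0.1 * rng.normal(size=n_train + n_test)
    lml, nlpd = gp_scores(X[:n_train], y[:n_train], X[n_train:], y[n_train:])
    print(f"d={d:4d}  log marginal likelihood={lml:8.2f}  test NLPD={nlpd:6.3f}")
```

Tracking both scores side by side mirrors the abstract's distinction: the marginal likelihood measures in-sample model quality, while the test negative log predictive density is one form of posterior predictive loss whose behavior in the input dimension may differ.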

Related articles:
arXiv:1805.08463 [stat.ML] (Published 2018-05-22)
Variational Learning on Aggregate Outputs with Gaussian Processes
arXiv:2506.17366 [stat.ML] (Published 2025-06-20)
Gaussian Processes and Reproducing Kernels: Connections and Equivalences
arXiv:1310.6740 [stat.ML] (Published 2013-10-24)
Active Learning of Linear Embeddings for Gaussian Processes