{
  "id": "2210.07612",
  "version": "v1",
  "published": "2022-10-14T08:09:33.000Z",
  "updated": "2022-10-14T08:09:33.000Z",
  "title": "Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes",
  "authors": [
    "Liam Hodgkinson",
    "Chris van der Heide",
    "Fred Roosta",
    "Michael W. Mahoney"
  ],
  "comment": "40 pages, 20 figures",
  "categories": [
    "stat.ML",
    "cs.LG"
  ],
  "abstract": "The quality of many modern machine learning models improves as model complexity increases, an effect that has been quantified, for predictive performance, with the non-monotonic double descent learning curve. Here, we address the overarching question: is there an analogous theory of double descent for models which estimate uncertainty? We provide a partially affirmative and partially negative answer in the setting of Gaussian processes (GP). Under standard assumptions, we prove that higher model quality for optimally-tuned GPs (including uncertainty prediction) under marginal likelihood is realized for larger input dimensions, and therefore exhibits a monotone error curve. After showing that marginal likelihood does not naturally exhibit double descent in the input dimension, we highlight related forms of posterior predictive loss that do exhibit non-monotonicity. Finally, we verify empirically that our results hold for real data, beyond our considered assumptions, and we explore consequences involving synthetic covariates.",
  "revisions": [
    {
      "version": "v1",
      "updated": "2022-10-14T08:09:33.000Z"
    }
  ],
  "analyses": {
    "keywords": [
      "gaussian processes",
      "uncertainty estimation",
      "monotonicity",
      "marginal likelihood",
      "non-monotonic double descent learning curve"
    ],
    "note": {
      "typesetting": "TeX",
      "pages": 40,
      "language": "en",
      "license": "arXiv",
      "status": "editable"
    }
  }
}