
arXiv:2310.09250 [cs.LG]

It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models

Lin Chen, Michal Lukasik, Wittawat Jitkrittum, Chong You, Sanjiv Kumar

Published 2023-10-13 (Version 1)

Classical wisdom in machine learning holds that the generalization error can be decomposed into bias and variance, and that these two terms exhibit a \emph{trade-off}. However, in this paper we show that for an ensemble of deep-learning-based classification models, bias and variance are \emph{aligned} at the sample level: squared bias is approximately \emph{equal} to variance for correctly classified sample points. We present empirical evidence confirming this phenomenon across a variety of deep learning models and datasets. Moreover, we study the phenomenon from two theoretical perspectives: calibration and neural collapse. We first show theoretically that, under the assumption that the models are well calibrated, bias-variance alignment follows. Second, starting from the picture provided by neural collapse theory, we show an approximate correlation between bias and variance.
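The per-sample squared bias and variance discussed in the abstract can be computed directly from an ensemble's predictions. Below is a minimal illustrative sketch (the "ensemble" here is simulated noise around a target distribution, not trained deep networks, so no alignment is claimed); it computes both terms and verifies the standard decomposition identity, where the expected squared error equals squared bias plus variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: K ensemble members, each producing a probability
# vector over C classes for N samples. In the paper's setting these would
# be independently trained deep classifiers.
N, C, K = 100, 5, 20
labels = rng.integers(0, C, size=N)
one_hot = np.eye(C)[labels]                     # target distribution e_y

# Simulated member predictions: softmax of noisy logits around the target.
logits = 3.0 * one_hot + rng.normal(0.0, 1.0, size=(K, N, C))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

mean_pred = probs.mean(axis=0)                  # ensemble mean prediction

# Per-sample squared bias: squared distance of the mean prediction
# from the one-hot target.
sq_bias = ((mean_pred - one_hot) ** 2).sum(axis=-1)

# Per-sample variance: average squared distance of each member's
# prediction from the ensemble mean.
variance = ((probs - mean_pred) ** 2).sum(axis=-1).mean(axis=0)

# Decomposition identity: expected squared error = squared bias + variance.
risk = ((probs - one_hot) ** 2).sum(axis=-1).mean(axis=0)
print(np.allclose(risk, sq_bias + variance))    # True
```

The identity holds exactly for the squared-error decomposition regardless of the model; the paper's contribution concerns the empirical relationship between the two terms for trained deep models.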
