On Bias-Variance Alignment in Deep Models

ICLR 2024 · #607 of 2297 papers · 5 authors · 0 citations

Abstract

Classical wisdom in machine learning holds that the generalization error can be decomposed into bias and variance, and that these two terms exhibit a trade-off. However, in this paper we show that for an ensemble of deep-learning-based classification models, bias and variance are aligned at the sample level: the squared bias is approximately equal to the variance for correctly classified sample points. We present empirical evidence confirming this phenomenon across a variety of deep learning models and datasets. Moreover, we study the phenomenon from two theoretical perspectives: calibration and neural collapse. We first show theoretically that bias-variance alignment emerges under the assumption that the models are well calibrated. Second, starting from the picture provided by neural collapse theory, we show an approximate correlation between bias and variance.
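As a rough illustration of the per-sample quantities the abstract refers to, the sketch below computes squared bias and variance for an ensemble of classifiers under the squared loss on one-hot labels, one standard way to instantiate the decomposition. The function name, array layout, and the use of the ensemble-mean prediction to decide correctness are assumptions for illustration, not the authors' code.

```python
import numpy as np

def per_sample_bias_variance(probs, labels):
    """Per-sample squared bias and variance of an ensemble under squared loss.

    probs:  (n_models, n_samples, n_classes) predicted class probabilities,
            one slice per independently trained model.
    labels: (n_samples,) integer class labels.
    """
    n_classes = probs.shape[2]
    onehot = np.eye(n_classes)[labels]      # one-hot targets, (n_samples, n_classes)
    mean_pred = probs.mean(axis=0)          # ensemble-average prediction
    # Squared bias: squared distance of the mean prediction from the target.
    bias_sq = ((mean_pred - onehot) ** 2).sum(axis=1)
    # Variance: mean squared deviation of each model from the mean prediction.
    variance = ((probs - mean_pred) ** 2).sum(axis=2).mean(axis=0)
    # Correctness judged by the ensemble-mean prediction (an assumption here).
    correct = mean_pred.argmax(axis=1) == labels
    return bias_sq, variance, correct

# Bias-variance alignment would show up as bias_sq ~ variance on the
# correctly classified subset, e.g. a per-sample ratio concentrated near 1:
# b2, var, ok = per_sample_bias_variance(probs, labels)
# print(np.median(b2[ok] / np.maximum(var[ok], 1e-12)))
```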

Citation History

Jan 28, 2026: 0 citations