arXiv Analytics

arXiv:2405.18296 [cs.LG]

Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training

Anchit Jain, Rozhin Nobahari, Aristide Baratin, Stefano Sarao Mannelli

Published 2024-05-28 (Version 1)

Machine learning systems often acquire biases by leveraging undesired features in the data, affecting accuracy unevenly across different sub-populations. Current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup that models different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setting, which we prove to be exact in high dimensions. Notably, our analysis reveals how different properties of the sub-populations influence bias at different timescales, showing a shifting preference of the classifier during training. Applying our findings to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias. We empirically validate our results in more complex scenarios by training deeper networks on synthetic and real datasets, including CIFAR10, MNIST, and CelebA.
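
The following is a minimal, illustrative sketch of the kind of setting the abstract describes, not the paper's exact model or analysis: a linear classifier trained by online SGD on data drawn from a two-group Gaussian mixture, with per-group accuracy tracked over training to observe how the classifier's preference between sub-populations can shift over time. The group names, relative sizes, noise levels, logistic loss, and learning rate below are arbitrary choices made for the example.

    # Minimal sketch (assumed setup, not the paper's): online SGD on a linear
    # classifier over a two-group Gaussian mixture, tracking per-group accuracy
    # to visualise how bias between sub-populations evolves during training.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 200                      # input dimension
    T = 20000                    # number of online SGD steps
    lr = 0.5 / d                 # learning rate (scaled with dimension)

    # Two sub-populations sharing a label structure but differing in
    # relative size and noise level (illustrative values).
    rho = {"A": 0.8, "B": 0.2}          # sampling probabilities
    sigma = {"A": 1.0, "B": 0.3}        # within-group noise levels
    mu = {g: rng.standard_normal(d) / np.sqrt(d) for g in ("A", "B")}

    def sample(group):
        """Draw (x, y) from a sub-population: y = +/-1, x = y*mu + noise."""
        y = rng.choice([-1.0, 1.0])
        x = y * mu[group] + sigma[group] * rng.standard_normal(d) / np.sqrt(d)
        return x, y

    def group_accuracy(w, group, n=2000):
        """Monte-Carlo estimate of the classifier's accuracy on one group."""
        xs, ys = zip(*(sample(group) for _ in range(n)))
        return np.mean(np.sign(np.array(xs) @ w) == np.array(ys))

    w = 0.01 * rng.standard_normal(d) / np.sqrt(d)   # small random init
    for t in range(T):
        g = "A" if rng.random() < rho["A"] else "B"
        x, y = sample(g)
        # one SGD step on the logistic loss log(1 + exp(-y * w.x))
        margin = y * (w @ x)
        w += lr * y * x / (1.0 + np.exp(margin))
        if t % 2000 == 0:
            print(f"step {t:6d}  acc(A)={group_accuracy(w, 'A'):.3f}  "
                  f"acc(B)={group_accuracy(w, 'B'):.3f}")

Printing the two accuracy curves side by side makes the transient behaviour visible: with unequal group sizes and noise levels, the group that dominates early training need not be the one favoured at convergence, which is the qualitative phenomenon the paper analyses exactly in the high-dimensional limit.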

Related articles:
arXiv:2002.09917 [cs.LG] (Published 2020-02-23)
Improve SGD Training via Aligning Mini-batches
arXiv:1907.07384 [cs.LG] (Published 2019-07-17)
Feature Selection via Mutual Information: New Theoretical Insights
arXiv:2006.02682 [cs.LG] (Published 2020-06-04)
Some Theoretical Insights into Wasserstein GANs