arXiv Analytics

arXiv:2007.03511 [cs.LG]

Estimating Generalization under Distribution Shifts via Domain-Invariant Representations

Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka

Published 2020-07-06 (Version 1)

When machine learning models are deployed on a test distribution different from the training distribution, they can perform poorly, yet naive estimates overstate their performance. In this work, we aim to better estimate a model's performance under distribution shift, without supervision. To do so, we use a set of domain-invariant predictors as a proxy for the unknown, true target labels. Since the error of the resulting risk estimate depends on the target risk of the proxy model, we study the generalization of domain-invariant representations and show that the complexity of the latent representation has a significant influence on the target risk. Empirically, our approach (1) enables self-tuning of domain adaptation models, and (2) accurately estimates the target error of given models under distribution shift. Other applications include model selection, early stopping, and error detection.
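
To make the proxy-label idea above concrete, here is a minimal sketch of a disagreement-based risk estimate, assuming a trained candidate model and a domain-invariant proxy predictor are already available. The names `model`, `proxy`, and `estimate_target_error` are hypothetical illustrations, not the paper's released code; the sketch only shows the general mechanism of substituting proxy predictions for the unavailable target labels.

```python
import numpy as np

def estimate_target_error(model, proxy, target_inputs):
    """Estimate the target-domain error of `model` without target labels.

    The unknown true labels are replaced by the predictions of a
    domain-invariant proxy predictor; per the abstract, the quality of
    this estimate depends on the (unknown) target risk of the proxy itself.
    """
    model_preds = model.predict(target_inputs)   # candidate model's predictions
    proxy_preds = proxy.predict(target_inputs)   # proxy stands in for true labels
    # Disagreement rate on unlabeled target data serves as the risk estimate.
    return float(np.mean(model_preds != proxy_preds))
```

Under this reading, the self-tuning application mentioned above would amount to selecting the candidate model or hyperparameter setting that minimizes this estimated target error.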

Comments: arXiv admin note: text overlap with arXiv:1910.05804
Journal: International Conference on Machine Learning, 2020
Categories: cs.LG, stat.ML
Related articles:
arXiv:2210.04166 [cs.LG] (Published 2022-10-09)
Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples
arXiv:2006.04662 [cs.LG] (Published 2020-06-08)
Rethinking Importance Weighting for Deep Learning under Distribution Shift
arXiv:2006.14422 [cs.LG] (Published 2020-06-25)
Incremental Training of Graph Neural Networks on Temporal Graphs under Distribution Shift