arXiv:2210.04166 [cs.LG]

Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples

Fatih Furkan Yilmaz, Reinhard Heckel

Published 2022-10-09 (Version 1)

Modern image classifiers achieve high predictive accuracy, but the predictions typically come without reliable uncertainty estimates. Conformal prediction algorithms provide uncertainty estimates by predicting a set of classes based on the probability estimates of the classifier (for example, the softmax scores). To provide such sets, conformal prediction algorithms often rely on estimating a cutoff threshold for the probability estimates, and this threshold is chosen based on a calibration set. Conformal prediction methods guarantee reliability only when the calibration set is from the same distribution as the test set. Therefore, the methods need to be recalibrated for new distributions. However, in practice, labeled data from new distributions is rarely available, making calibration infeasible. In this work, we consider the problem of predicting the cutoff threshold for a new distribution based on unlabeled examples only. While it is impossible in general to guarantee reliability when calibrating based on unlabeled examples, we show that our method provides excellent uncertainty estimates under natural distribution shifts, and provably works for a specific model of distribution shift.
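To make the calibration step concrete, below is a minimal sketch of the standard split conformal recipe the abstract refers to: compute conformity scores on a labeled calibration set, take a finite-sample-corrected quantile as the cutoff threshold, and include every class whose score clears that threshold in the prediction set. This is the generic textbook procedure, not the paper's unlabeled-recalibration method; the array names (`softmax_cal`, `labels_cal`, `softmax_test`) and the 1 - softmax conformity score are illustrative assumptions.

```python
# Minimal sketch of split conformal prediction for a classifier (generic recipe,
# not the recalibration method proposed in the paper).
# Assumed inputs (hypothetical names):
#   softmax_cal  : (n, K) softmax scores on a labeled calibration set
#   labels_cal   : (n,) true labels for the calibration set
#   softmax_test : (m, K) softmax scores on test inputs
import numpy as np

def calibrate_threshold(softmax_cal, labels_cal, alpha=0.1):
    """Estimate the cutoff threshold from a labeled calibration set."""
    n = len(labels_cal)
    # Conformity score: 1 minus the softmax probability of the true class.
    scores = 1.0 - softmax_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected quantile level targeting 1 - alpha coverage.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_sets(softmax_test, qhat):
    """Boolean matrix: entry (i, k) is True if class k is in the set for test point i."""
    return (1.0 - softmax_test) <= qhat
```

The coverage guarantee of this recipe holds only when the calibration and test data come from the same distribution; under a distribution shift, the threshold `qhat` must be re-estimated, which is exactly the situation the paper addresses using unlabeled test examples.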
