arXiv Analytics

arXiv:2307.11972 [stat.ML]

Out-of-Distribution Optimality of Invariant Risk Minimization

Shoji Toyota, Kenji Fukumizu

Published 2023-07-22 (Version 1)

Deep neural networks often inherit spurious correlations embedded in the training data and hence may fail to generalize to unseen domains whose distributions differ from those of the domains that provide the training data. M. Arjovsky et al. (2019) introduced the concept of the out-of-distribution (o.o.d.) risk, the maximum risk over all domains, and formulated the problem caused by spurious correlations as minimization of the o.o.d. risk. Invariant Risk Minimization (IRM) is considered a promising approach to minimizing the o.o.d. risk: IRM estimates a minimizer of the o.o.d. risk by solving a bi-level optimization problem. Although IRM has attracted considerable attention with its empirical success, it comes with few theoretical guarantees; in particular, a solid guarantee that the bi-level optimization problem yields the minimum of the o.o.d. risk has not yet been established. Aiming to provide a theoretical justification for IRM, this paper rigorously proves that a solution to the bi-level optimization problem minimizes the o.o.d. risk under certain conditions. The result also gives sufficient conditions, on the distributions providing the training data and on the dimension of the feature space, for the bi-level optimization problem to minimize the o.o.d. risk.
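For orientation, the objects the abstract refers to can be sketched in the notation of Arjovsky et al. (2019); this is the standard formulation from that paper, and the exact statement analyzed in the present work may differ. With per-domain risks $R^{e}$, training domains $\mathcal{E}_{\mathrm{tr}} \subset \mathcal{E}_{\mathrm{all}}$, a data representation $\Phi$, and a classifier $\bar w$ on top of it, the o.o.d. risk and the IRM bi-level problem read:

$$ R^{\mathrm{ood}}(f) \;=\; \max_{e \in \mathcal{E}_{\mathrm{all}}} R^{e}(f) $$

$$ \min_{\Phi,\,\bar w}\; \sum_{e \in \mathcal{E}_{\mathrm{tr}}} R^{e}(\bar w \circ \Phi) \quad \text{s.t.} \quad \bar w \in \arg\min_{w} R^{e}(w \circ \Phi) \;\; \text{for all } e \in \mathcal{E}_{\mathrm{tr}} $$

The inner constraint asks for a single classifier $\bar w$ that is simultaneously optimal in every training domain, which is what makes the problem bi-level; the paper's contribution is identifying conditions under which a solution of this problem attains $\min_f R^{\mathrm{ood}}(f)$.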

Related articles:
arXiv:1402.1869 [stat.ML] (Published 2014-02-08, updated 2014-06-07)
On the Number of Linear Regions of Deep Neural Networks
arXiv:1905.10634 [stat.ML] (Published 2019-05-25)
Adaptive, Distribution-Free Prediction Intervals for Deep Neural Networks
arXiv:1901.02182 [stat.ML] (Published 2019-01-08)
Comments on "Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?"