arXiv Analytics

arXiv:2405.16906 [stat.ML]

Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift

Mitsuhiro Fujikawa, Yohei Akimoto, Jun Sakuma, Kazuto Fukuchi

Published 2024-05-27 (Version 1)

Transfer learning improves prediction accuracy on a target distribution by leveraging data from a source distribution, with significant benefits in a wide range of applications. This paper introduces a novel dissimilarity measure that utilizes vicinity information, i.e., the local structure of data points, to analyze the excess error in classification under covariate shift, a transfer learning setting in which the marginal feature distributions differ but the conditional label distribution remains the same. We characterize the excess error in terms of the proposed measure and demonstrate faster or competitive convergence rates compared to previous techniques. Notably, our approach remains effective under non-absolute continuity, i.e., when the target distribution is not absolutely continuous with respect to the source, a situation that often arises in real-world applications. Our theoretical analysis bridges the gap between existing theoretical findings and empirical observations in transfer learning, particularly in scenarios with significant differences between the source and target distributions.
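To make the covariate-shift setting concrete, the following minimal sketch (not taken from the paper, and using classical closed-form importance weighting rather than the paper's vicinity-informed measure) simulates a shared conditional label distribution with differing Gaussian marginals, then estimates a classifier's target-domain error from source samples alone. All distribution parameters and the threshold classifier are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def label(x, rng):
    # Shared conditional across domains: P(Y=1 | x) = sigmoid(2x)
    return (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-2.0 * x))).astype(int)

# Covariate shift: the marginal P(X) differs, the conditional P(Y | X) does not.
x_src = rng.normal(0.0, 1.0, 1000)   # source marginal N(0, 1)
x_tgt = rng.normal(1.0, 1.0, 1000)   # target marginal N(1, 1)
y_src = label(x_src, rng)

# Classical density-ratio importance weights w(x) = p_tgt(x) / p_src(x),
# available in closed form here because both marginals are unit-variance Gaussians.
w = np.exp(-0.5 * ((x_src - 1.0) ** 2 - x_src ** 2))

# The w-weighted error of a simple threshold classifier sign(x), computed on
# source samples, is an unbiased estimate of its risk under the target marginal.
pred = (x_src > 0).astype(int)
weighted_err = np.average(pred != y_src, weights=w)
print(round(float(weighted_err), 3))
```

Importance weighting requires the target marginal to be absolutely continuous with respect to the source (otherwise the ratio is undefined on part of the target's support); the abstract's point is precisely that its vicinity-informed analysis covers cases where this fails.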

Related articles:
arXiv:2406.03171 [stat.ML] (Published 2024-06-05)
High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization
arXiv:2002.11642 [stat.ML] (Published 2020-02-26)
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
arXiv:1809.08159 [stat.ML] (Published 2018-09-21)
Intractable Likelihood Regression for Covariate Shift by Kernel Mean Embedding