arXiv Analytics

arXiv:2310.04971 [cs.LG]

Understanding the Robustness of Multi-modal Contrastive Learning to Distribution Shift

Yihao Xue, Siddharth Joshi, Dang Nguyen, Baharan Mirzasoleiman

Published 2023-10-08; updated 2024-03-17 (Version 2)

Recently, multimodal contrastive learning (MMCL) approaches, such as CLIP, have achieved remarkable success in learning representations that are robust against distribution shift and generalize to new domains. Despite this empirical success, the mechanism behind learning such generalizable representations is not understood. In this work, we rigorously analyze this problem and uncover two mechanisms behind MMCL's robustness: \emph{intra-class contrasting}, which allows the model to learn features with high variance, and \emph{inter-class feature sharing}, where annotated details in one class help the model learn other classes better. Both mechanisms prevent spurious features that are over-represented in the training data from overshadowing the generalizable core features. This yields superior zero-shot classification accuracy under distribution shift. Furthermore, we theoretically demonstrate the benefits of using rich captions for robustness and explore the effect of annotating different types of details in the captions. We validate our theoretical findings through experiments, including a well-designed synthetic experiment and an experiment involving training CLIP models on MSCOCO/Conceptual Captions and evaluating them on shifted versions of ImageNet.
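For readers unfamiliar with the objective the abstract refers to, the following is a minimal sketch of the standard CLIP-style symmetric contrastive loss over paired image and caption embeddings. The function name, temperature value, and PyTorch implementation are illustrative assumptions for orientation only, not the paper's theoretical formulation.

```python
# Minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) loss
# over a batch of paired image/caption embeddings. Illustrative only.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_embeds: torch.Tensor,
                          text_embeds: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    # Normalize so that dot products are cosine similarities.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Pairwise similarity matrix; the diagonal holds the matching pairs.
    logits = image_embeds @ text_embeds.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Contrast each image against all captions, and each caption against all images.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)


if __name__ == "__main__":
    # Example usage with random embeddings (batch of 8, embedding dim 512).
    imgs = torch.randn(8, 512)
    caps = torch.randn(8, 512)
    print(clip_contrastive_loss(imgs, caps).item())
```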

Related articles:
arXiv:2210.04166 [cs.LG] (Published 2022-10-09)
Test-time Recalibration of Conformal Predictors Under Distribution Shift Based on Unlabeled Examples
arXiv:1811.02617 [cs.LG] (Published 2018-11-06)
An Experiment with Bands and Dimensions in Classifiers
arXiv:2007.03511 [cs.LG] (Published 2020-07-06)
Estimating Generalization under Distribution Shifts via Domain-Invariant Representations