arXiv Analytics

arXiv:2010.04592 [cs.LG]

Contrastive Learning with Hard Negative Samples

Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, Stefanie Jegelka

Published 2020-10-09 (Version 1)

We consider the question: how can you sample good negative examples for contrastive learning? We argue that, as with metric learning, learning contrastive representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge in using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that rely on label information. In response, we develop a new class of unsupervised methods for selecting hard negative samples in which the user can control the amount of hardness. A limiting case of this sampling results in a representation that tightly clusters each class and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
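The abstract does not spell out the sampling scheme, but one plausible reading, consistent with the "few additional lines of code" claim, is an importance-weighted InfoNCE objective in which in-batch negatives are up-weighted according to their similarity to the anchor, with a concentration parameter controlling the amount of hardness. The sketch below is illustrative rather than the authors' reference implementation; the function name, the hardness parameter beta, and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def hard_negative_infonce(z1, z2, beta=1.0, temperature=0.5):
    """Hardness-weighted InfoNCE loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same batch;
    row i of z1 and row i of z2 form the positive pair.
    beta=0 recovers the standard uniform-negative InfoNCE loss.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    n = z1.size(0)

    # Cosine similarities between every anchor in view 1 and every
    # candidate in view 2, scaled by the temperature.
    sim = z1 @ z2.t() / temperature          # (N, N)
    pos = torch.diag(sim)                    # positives on the diagonal

    # Off-diagonal entries are the in-batch negatives for each anchor.
    neg_mask = ~torch.eye(n, dtype=torch.bool, device=sim.device)
    neg = sim.masked_select(neg_mask).view(n, n - 1)

    # Hardness weights: negatives more similar to the anchor are
    # up-weighted; beta controls how sharply the weights concentrate.
    # Rows sum to (n - 1), so beta=0 yields uniform weights of 1.
    # detach() keeps gradients from flowing through the weighting itself.
    weights = torch.softmax(beta * neg.detach(), dim=1) * (n - 1)

    # Weighted log-partition: log(exp(pos) + sum_j w_j * exp(neg_j)).
    denom = torch.logsumexp(
        torch.cat([pos.unsqueeze(1), neg + weights.log()], dim=1), dim=1
    )
    return (denom - pos).mean()

# Usage sketch: embeddings of two augmentations of the same batch.
loss = hard_negative_infonce(torch.randn(8, 128), torch.randn(8, 128), beta=1.0)
```

Because the reweighting reuses the similarity matrix that InfoNCE already computes, a scheme of this shape adds essentially no computational overhead, matching the abstract's claim.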

Related articles:
arXiv:2008.10150 [cs.LG] (Published 2020-08-24)
Contrastive learning, multi-view redundancy, and linear models
arXiv:2010.05113 [cs.LG] (Published 2020-10-10)
Contrastive Representation Learning: A Framework and Review
arXiv:2204.13999 [cs.LG] (Published 2022-04-29)
Statistical applications of contrastive learning