arXiv:2002.08973 [cs.LG]

Affinity and Diversity: Quantifying Mechanisms of Data Augmentation

Raphael Gontijo-Lopes, Sylvia J. Smullin, Ekin D. Cubuk, Ethan Dyer

Published 2020-02-20, Version 1

Though data augmentation has become a standard component of deep neural network training, the mechanisms underlying the effectiveness of these techniques remain poorly understood. In practice, augmentation policies are often chosen using heuristics based on either distribution shift or augmentation diversity. Inspired by these heuristics, we seek to quantify how data augmentation improves model generalization. To this end, we introduce two interpretable and easy-to-compute measures: Affinity and Diversity. We find that augmentation performance is predicted not by either measure alone, but by jointly optimizing the two.
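The abstract does not spell out how Affinity and Diversity are computed, but its description (distribution shift on the one hand, added training complexity on the other) suggests measures along the following lines. This is a hypothetical sketch, not the paper's actual definitions: here Affinity is taken as the ratio of a clean-trained model's accuracy on augmented validation data to its accuracy on clean validation data, and Diversity as the ratio of final training loss with augmentation to final training loss without it.

```python
# Hedged sketch of two augmentation measures in the spirit of the abstract.
# Both definitions below are assumptions for illustration, not the paper's.

def affinity(acc_on_augmented_val: float, acc_on_clean_val: float) -> float:
    """Assumed ratio form: 1.0 means the augmentation causes no apparent
    distribution shift from the clean-trained model's point of view."""
    return acc_on_augmented_val / acc_on_clean_val


def diversity(final_train_loss_augmented: float,
              final_train_loss_clean: float) -> float:
    """Assumed ratio form: values above 1.0 mean the augmented training
    data is harder for the model to fit than the clean data."""
    return final_train_loss_augmented / final_train_loss_clean


# Toy numbers: a mild augmentation (small shift, little added difficulty)
mild = (affinity(0.93, 0.95), diversity(0.08, 0.07))
# versus an aggressive one (large shift, much added difficulty)
aggressive = (affinity(0.60, 0.95), diversity(0.35, 0.07))
print(mild, aggressive)
```

Under these assumed definitions, the abstract's claim would correspond to neither number alone ranking augmentations well; a policy would need high affinity (small shift) together with sufficient diversity (enough added complexity) to help generalization.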

Related articles:
arXiv:2004.04795 [cs.LG] (Published 2020-04-09)
Exemplar VAEs for Exemplar based Generation and Data Augmentation
arXiv:2203.03304 [cs.LG] (Published 2022-03-07)
Regularising for invariance to data augmentation improves supervised learning
arXiv:2107.00644 [cs.LG] (Published 2021-07-01)
Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation