arXiv Analytics

arXiv:2003.13216 [cs.CV]

Learning to Learn Single Domain Generalization

Fengchun Qiao, Long Zhao, Xi Peng

Published 2020-03-30 (Version 1)

We are concerned with a worst-case scenario in model generalization: a model aims to perform well on many unseen domains while only a single domain is available for training. We propose a new method, named adversarial domain augmentation, to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to justify our formulation, while extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single domain generalization.
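To make the core idea concrete, here is a minimal, hypothetical sketch of adversarial data augmentation: "fictitious" samples are created by ascending the task-loss gradient with respect to the inputs. This is not the authors' M-ADA implementation (which adds a WAE-based relaxation and a meta-learning scheme); the logistic-regression model, step size, and iteration count below are illustrative assumptions only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, X, y):
    # Binary cross-entropy for a toy logistic-regression "model".
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def adversarial_augment(w, X, y, step=0.5, iters=5):
    # Gradient ASCENT on the loss w.r.t. the inputs produces
    # "challenging" out-of-distribution samples for the current model.
    X_adv = X.copy()
    for _ in range(iters):
        p = sigmoid(X_adv @ w)
        grad_X = np.outer(p - y, w)  # d(loss)/dX, up to a 1/n factor
        X_adv += step * grad_X
    return X_adv

rng = np.random.default_rng(0)
w = np.array([1.0, -1.0])           # fixed toy model parameters
X = rng.normal(size=(32, 2))        # the single source domain
y = (X @ w > 0).astype(float)

X_adv = adversarial_augment(w, X, y)
```

In the full method, such augmented populations would be fed back into training (under a semantic-consistency constraint) so the model learns to generalize beyond the single source domain.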

Comments: In CVPR 2020 (13 pages including supplementary material). The source code and pre-trained models are publicly available at: https://github.com/joffery/M-ADA
Categories: cs.CV