{ "id": "2003.13216", "version": "v1", "published": "2020-03-30T04:39:53.000Z", "updated": "2020-03-30T04:39:53.000Z", "title": "Learning to Learn Single Domain Generalization", "authors": [ "Fengchun Qiao", "Long Zhao", "Xi Peng" ], "comment": "In CVPR 2020 (13 pages including supplementary material). The source code and pre-trained models are publicly available at: https://github.com/joffery/M-ADA", "categories": [ "cs.CV" ], "abstract": "We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while there is only one single domain available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create \"fictitious\" yet \"challenging\" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to testify our formulation, while extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization.", "revisions": [ { "version": "v1", "updated": "2020-03-30T04:39:53.000Z" } ], "analyses": { "keywords": [ "learn single domain generalization", "method named adversarial domain augmentation", "tackling single domain generalization", "multiple benchmark datasets", "desirable domain augmentation" ], "tags": [ "github project" ], "note": { "typesetting": "TeX", "pages": 13, "language": "en", "license": "arXiv", "status": "editable" } } }