arXiv Analytics

arXiv:2003.04858 [cs.CV]

Unpaired Image-to-Image Translation using Adversarial Consistency Loss

Yihao Zhao, Ruihai Wu, Hao Dong

Published 2020-03-10 (Version 1)

Unpaired image-to-image translation is a class of vision problems whose goal is to find the mapping between different image domains using unpaired training data. Cycle-consistency loss is a widely used constraint for such problems. However, due to its strict pixel-level constraint, it cannot perform geometric changes, remove large objects, or ignore irrelevant texture. In this paper, we propose a novel adversarial-consistency loss for image-to-image translation. This loss does not require the translated image to be translated back to a specific source image, but instead encourages the translated images to retain important features of the source images, overcoming the drawbacks of cycle-consistency loss noted above. Our method achieves state-of-the-art results on three challenging tasks: glasses removal, male-to-female translation, and selfie-to-anime translation.
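To illustrate the distinction the abstract draws, the sketch below contrasts a pixel-level cycle-consistency loss with a generic adversarial consistency term. This is a minimal, hypothetical illustration of the general idea, not the paper's exact formulation; the module names (SimpleTranslator, ConsistencyDiscriminator) and architectures are invented for the example.

```python
# Hypothetical sketch: pixel-level cycle consistency vs. an adversarial
# consistency term. Architectures and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleTranslator(nn.Module):
    """Toy generator mapping 3-channel images between two domains."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class ConsistencyDiscriminator(nn.Module):
    """Toy discriminator scoring whether a back-translated image is a
    plausible counterpart of the given source image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, source, candidate):
        # Condition on the source by channel-wise concatenation.
        return self.net(torch.cat([source, candidate], dim=1))


G_st, G_ts = SimpleTranslator(), SimpleTranslator()  # source->target, target->source
D_cons = ConsistencyDiscriminator()

x_s = torch.randn(4, 3, 64, 64)   # batch of source-domain images
x_t = G_st(x_s)                   # translated to the target domain
x_s_back = G_ts(x_t)              # translated back to the source domain

# (a) Pixel-level cycle consistency: forces x_s_back to reproduce x_s exactly,
#     which penalizes geometric changes and the removal of large objects.
loss_cycle = F.l1_loss(x_s_back, x_s)

# (b) Adversarial consistency (sketch): a discriminator only asks whether
#     x_s_back is plausible given x_s, so the generator can keep important
#     source features without matching every pixel.
real_logits = D_cons(x_s, x_s)
fake_logits = D_cons(x_s, x_s_back.detach())
loss_D = (
    F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
)
gen_logits = D_cons(x_s, x_s_back)
loss_G_consistency = F.binary_cross_entropy_with_logits(
    gen_logits, torch.ones_like(gen_logits)
)
```

The key design difference is that (a) compares images pixel by pixel, while (b) only compares distributions through a learned critic, which is what allows the looser constraint described in the abstract.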

Related articles: Most relevant | Search more
arXiv:2303.16280 [cs.CV] (Published 2023-03-28)
Rethinking CycleGAN: Improving Quality of GANs for Unpaired Image-to-Image Translation
arXiv:2203.16015 [cs.CV] (Published 2022-03-30)
ITTR: Unpaired Image-to-Image Translation with Transformers
arXiv:1902.03782 [cs.CV] (Published 2019-02-11)
Unpaired Image-to-Image Translation with Domain Supervision