{ "id": "1911.10291", "version": "v1", "published": "2019-11-23T01:15:32.000Z", "updated": "2019-11-23T01:15:32.000Z", "title": "Invert and Defend: Model-based Approximate Inversion of Generative Adversarial Networks for Secure Inference", "authors": [ "Wei-An Lin", "Yogesh Balaji", "Pouya Samangouei", "Rama Chellappa" ], "categories": [ "cs.CV" ], "abstract": "Inferring the latent variable that generates a given test sample is a challenging problem in Generative Adversarial Networks (GANs). In this paper, we propose InvGAN, a novel framework for solving the inference problem in GANs, which involves training an encoder network capable of inverting a pre-trained generator network without access to any training data. Under mild assumptions, we theoretically show that using InvGAN, we can approximately invert the generations of any latent code of a trained GAN model. Furthermore, we empirically demonstrate the superiority of our inference scheme through quantitative and qualitative comparisons with other methods that perform a similar task. We also show the effectiveness of our framework on the problem of adversarial defenses, where InvGAN can successfully be used as a projection-based defense mechanism. Additionally, we show how InvGAN can be used to implement reparameterization white-box attacks on projection-based defense mechanisms. Experimental validation on several benchmark datasets demonstrates the efficacy of our method in achieving improved performance on several white-box and black-box attacks. Our code is available at https://github.com/yogeshbalaji/InvGAN.", "revisions": [ { "version": "v1", "updated": "2019-11-23T01:15:32.000Z" } ], "analyses": { "keywords": [ "generative adversarial networks", "model-based approximate inversion", "secure inference", "projection-based defense mechanism", "reparameterization white-box attacks" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }