
arXiv:1902.01878 [cs.LG]

Disguised-Nets: Image Disguising for Privacy-preserving Deep Learning

Sagar Sharma, Keke Chen

Published 2019-02-05, Version 1

Due to the high training costs of deep learning, model developers often rent cloud GPU servers to achieve better efficiency. However, this practice raises privacy concerns. An adversarial party may be interested in 1) personally identifiable information encoded in the training data and the learned models, 2) misusing the sensitive models for its own benefit, or 3) launching model inversion attacks (MIA) and generative adversarial network (GAN) attacks to reconstruct replicas of training data (e.g., sensitive images). Learning from encrypted data seems impractical due to the large training datasets and expensive learning algorithms, while differential-privacy-based approaches must make significant trade-offs between privacy and model quality. We investigate the use of image disguising techniques to protect both data and model privacy. Our preliminary results show that, surprisingly, images disguised with block-wise permutation and transformations still yield reasonably well-performing deep neural networks (DNNs). The disguised images are also resilient to the deep-learning-enhanced visual discrimination attack and provide an extra layer of protection from MIA and GAN attacks.
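The abstract's core idea, disguising images by permuting blocks and applying per-block transformations before training, can be sketched as follows. This is a minimal illustration, not the authors' actual scheme: the choice of flips as the per-block transformation, the function name, and the key handling are all assumptions for the sketch.

```python
import numpy as np

def disguise_image(img, block_size, rng):
    """Disguise a 2-D image by block-wise permutation plus a random
    per-block transformation (here: horizontal/vertical flips).

    Illustrative sketch only; the paper's exact transformation family
    is not specified here. `rng` acts as the secret disguising key:
    the same seed reproduces the same disguise.
    """
    h, w = img.shape
    bh, bw = h // block_size, w // block_size
    # Split the image into non-overlapping blocks, row-major order.
    blocks = [img[i * block_size:(i + 1) * block_size,
                  j * block_size:(j + 1) * block_size]
              for i in range(bh) for j in range(bw)]
    # Secret permutation over block positions.
    perm = rng.permutation(len(blocks))
    out = np.empty_like(img)
    for idx in range(len(blocks)):
        blk = blocks[perm[idx]]
        # Per-block transformation: random flips (an assumed example).
        if rng.random() < 0.5:
            blk = blk[::-1, :]
        if rng.random() < 0.5:
            blk = blk[:, ::-1]
        i, j = divmod(idx, bw)
        out[i * block_size:(i + 1) * block_size,
            j * block_size:(j + 1) * block_size] = blk
    return out

# Example: disguise an 8x8 image with 4x4 blocks.
rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)
disguised = disguise_image(img, 4, rng)
```

Because the disguise only rearranges pixels, the pixel-value distribution is preserved while spatial structure is scrambled, which is why a DNN can still learn from disguised data even though the images are no longer visually recognizable.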

Related articles:
arXiv:2103.03399 [cs.LG] (Published 2021-03-05)
Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data
arXiv:2107.12342 [cs.LG] (Published 2021-07-26)
Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning
arXiv:1812.04513 [cs.LG] (Published 2018-12-11)
The Impact of Quantity of Training Data on Recognition of Eating Gestures