arXiv Analytics

arXiv:2012.07828 [cs.LG]

Robustness Threats of Differential Privacy

Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets

Published 2020-12-14, updated 2021-08-23 (Version 2)

Differential privacy (DP) is a gold-standard concept for measuring and guaranteeing privacy in data analysis. It is well known that the cost of adding DP to a deep learning model is a loss of accuracy. However, it remains unclear how DP affects the robustness of the model. Standard neural networks are not robust to different input perturbations, whether adversarial attacks or common corruptions. In this paper, we empirically observe an interesting trade-off between the privacy and the robustness of neural networks. We experimentally demonstrate that networks trained with DP can, in some settings, be even more vulnerable than their non-private counterparts. To explore this, we extensively study different robustness measurements, including FGSM and PGD adversaries, the distance to linear decision boundaries, the curvature profile, and performance on a corrupted dataset. Finally, we study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect (decrease and increase) the robustness of the model.
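For readers unfamiliar with the components the abstract names, the sketch below (PyTorch, not the authors' code) illustrates the two ingredients of DP training it refers to, per-example gradient clipping and Gaussian noise addition as in DP-SGD, together with a one-step FGSM adversary used as a robustness probe. The model, clipping norm, noise multiplier, and epsilon values are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch (assumed PyTorch implementation, not the authors' code).
import torch
import torch.nn.functional as F

def dp_sgd_step(model, xs, ys, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD-style update: clip each per-example gradient to clip_norm,
    sum the clipped gradients, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, and apply the averaged noisy gradient."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                              # per-example gradients
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)                             # clipped gradient
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(p) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(xs)               # noisy averaged step

def fgsm_attack(model, x, y, eps=0.1):
    """FGSM adversary: a single signed-gradient step of size eps on the input."""
    x_adv = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()
```

A PGD adversary, also studied in the paper, would iterate the FGSM step several times while projecting the perturbation back onto the epsilon-ball around the original input.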

Comments: NeurIPS'20 Privacy-Preserving Machine Learning Workshop
Categories: cs.LG, cs.CR
Related articles:
arXiv:2007.11524 [cs.LG] (Published 2020-07-22)
Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising
arXiv:2102.08166 [cs.LG] (Published 2021-02-16)
Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?
arXiv:1905.12101 [cs.LG] (Published 2019-05-28)
Differential Privacy Has Disparate Impact on Model Accuracy