arXiv:2211.10708 [cs.LG]

A Survey on Differential Privacy with Machine Learning and Future Outlook

Samah Baraheem, Zhongmei Yao

Published 2022-11-19 (Version 1)

Machine learning models and applications have become increasingly pervasive. With this rapid increase in the development and deployment of machine learning models, concerns about privacy have arisen, so there is a legitimate need to protect data from leakage and from attacks. One of the strongest and most prevalent privacy models that can be used to protect machine learning models from attacks and vulnerabilities is differential privacy (DP). DP is a strict and rigorous definition of privacy: it guarantees that an adversary cannot reliably determine whether a specific participant is included in the dataset. It works by injecting noise into the data, whether into the inputs, the outputs, the ground-truth labels, the objective function, or even the gradients, to alleviate the privacy issue and protect the data. To this end, this survey presents differentially private machine learning algorithms categorized into two main groups (traditional machine learning models vs. deep learning models). Moreover, future research directions for differential privacy with machine learning algorithms are outlined.
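
To make the guarantee concrete: a randomized mechanism M is epsilon-differentially private if, for every pair of neighboring datasets D and D' differing in a single participant's record and every set of outputs S, Pr[M(D) in S] <= e^epsilon * Pr[M(D') in S]. The following is a minimal illustrative sketch (not from the paper; plain Python assuming NumPy) of the Laplace mechanism, the classic output-perturbation instance of the noise injection described above:

# A minimal sketch (not from the paper) of the Laplace mechanism,
# illustrating output perturbation: noise calibrated to the query's
# sensitivity and the privacy budget epsilon is added to the true
# answer before release. All names here are illustrative.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially private estimate of `true_value`.

    `sensitivity` is the maximum change in the query output when one
    participant's record is added or removed; `epsilon` is the privacy
    budget (smaller epsilon = stronger privacy, noisier answers).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release the size of a dataset. A counting query
# has sensitivity 1, since adding or removing one person changes the
# count by exactly 1, so the noise scale is 1 / epsilon.
records = list(range(10_000))
private_count = laplace_mechanism(len(records), sensitivity=1.0, epsilon=0.5)
print(f"true count: {len(records)}, private count: {private_count:.1f}")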

Related articles:
arXiv:1412.7584 [cs.LG] (Published 2014-12-24)
Differential Privacy and Machine Learning: a Survey and Review
arXiv:2012.07828 [cs.LG] (Published 2020-12-14, updated 2021-08-23)
Robustness Threats of Differential Privacy
arXiv:2211.00734 [cs.LG] (Published 2022-11-01)
On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning