arXiv Analytics

arXiv:1808.00087 [cs.LG]

Subsampled Rényi Differential Privacy and Analytical Moments Accountant

Yu-Xiang Wang, Borja Balle, Shiva Kasiviswanathan

Published 2018-07-31, Version 1

We study the problem of subsampling in differential privacy (DP), a question that is the centerpiece behind many successful differentially private machine learning algorithms. Specifically, we provide a tight upper bound on the Rényi Differential Privacy (RDP) (Mironov, 2017) parameters for algorithms that: (1) subsample the dataset, and then (2) apply a randomized mechanism M to the subsample, in terms of the RDP parameters of M and the subsampling probability parameter. This result generalizes the classic subsampling-based "privacy amplification" property of $(\epsilon,\delta)$-differential privacy, which applies only to one fixed pair of $(\epsilon,\delta)$, to a stronger version that exploits properties of each specific randomized algorithm and satisfies an entire family of $(\epsilon(\delta),\delta)$-differential privacy guarantees for all $\delta\in [0,1]$. Our experiments confirm the advantage of using our techniques over keeping track of $(\epsilon,\delta)$ directly, especially in the setting where we need to compose many rounds of data access.
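The accounting workflow the abstract alludes to can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's tight subsampled bound: it uses the standard RDP of the Gaussian mechanism, $\epsilon_{\mathrm{RDP}}(\alpha) = \alpha/(2\sigma^2)$, additive composition over rounds, and the standard conversion $(\alpha, \epsilon_{\mathrm{RDP}})$-RDP $\Rightarrow (\epsilon_{\mathrm{RDP}} + \log(1/\delta)/(\alpha-1), \delta)$-DP, minimized over a grid of orders. All function names are illustrative.

```python
import math

def gaussian_rdp(alpha, sigma):
    # RDP of the Gaussian mechanism with noise multiplier sigma
    # for a sensitivity-1 query (Mironov, 2017): alpha / (2 * sigma^2).
    return alpha / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_fn, delta, orders):
    # Standard conversion: (alpha, eps)-RDP implies
    # (eps + log(1/delta) / (alpha - 1), delta)-DP; take the best order.
    return min(rdp_fn(a) + math.log(1.0 / delta) / (a - 1) for a in orders)

def composed_epsilon(sigma, steps, delta, orders=range(2, 64)):
    # RDP composes additively, so T identical rounds have RDP
    # T * eps_RDP(alpha); convert to (eps, delta)-DP once at the end.
    return rdp_to_dp(lambda a: steps * gaussian_rdp(a, sigma), delta, orders)

# Track privacy loss over many rounds of data access.
eps = composed_epsilon(sigma=4.0, steps=100, delta=1e-5)
```

Keeping the ledger in RDP and converting to $(\epsilon, \delta)$ only at the end is exactly why the analytical moments accountant composes gracefully over many rounds; the paper's contribution is the tight RDP bound for the *subsampled* mechanism, which plugs into the same pipeline in place of `gaussian_rdp` here.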

Related articles:

arXiv:2107.04265 [cs.LG] (Published 2021-07-09)
Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation

arXiv:2305.05900 [cs.LG] (Published 2023-05-10)
DPMLBench: Holistic Evaluation of Differentially Private Machine Learning
Chengkun Wei et al.

arXiv:2410.22673 [cs.LG] (Published 2024-10-30)
Calibrating Practical Privacy Risks for Differentially Private Machine Learning