arXiv Analytics

arXiv:1808.06651 [cs.LG]

Privacy Amplification by Iteration

Vitaly Feldman, Ilya Mironov, Kunal Talwar, Abhradeep Thakurta

Published 2018-08-20 (Version 1)

Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analyses of differential privacy for such algorithms often proceed by ensuring the privacy of each step and then reasoning about the cumulative privacy cost of the whole algorithm. This approach relies on composition theorems for differential privacy, which permit the release of all intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees. We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we show that we can achieve guarantees similar to those obtainable via the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied.
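
To make the setting concrete, below is a minimal Python sketch of the kind of iteration the abstract describes: noisy stochastic gradient descent in which each step touches one data point, Gaussian noise is injected at every update, each step is contractive (here via projection onto a ball), and only the final iterate is released. The function name noisy_sgd, the toy quadratic loss, and all parameter values are illustrative assumptions for exposition, not the paper's algorithm or constants.

```python
import numpy as np

def noisy_sgd(data, lr=0.1, sigma=1.0, radius=1.0, seed=0):
    """One pass of noisy SGD over `data`; intermediate iterates stay hidden."""
    rng = np.random.default_rng(seed)
    w = np.zeros(data.shape[1])
    for x in data:                       # one data point per iteration
        grad = w - x                     # gradient of the toy loss 0.5*||w - x||^2
        w = w - lr * (grad + sigma * rng.standard_normal(w.shape))
        # Projection onto the ball of radius `radius` is non-expansive: this
        # contractivity is the structural property that the
        # amplification-by-iteration analysis exploits.
        norm = np.linalg.norm(w)
        if norm > radius:
            w = w * (radius / norm)
    return w                             # only the final iterate is published

# Illustrative usage on synthetic data.
w_final = noisy_sgd(np.random.default_rng(1).normal(size=(100, 5)))
```

The intuition the sketch is meant to convey: a data point processed early in the pass is followed by many further noisy, contractive updates before anything is released, so its influence on the published iterate is diluted well beyond what the per-step privacy guarantee alone would imply.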

Comments: Extended abstract appears in Foundations of Computer Science (FOCS) 2018
Categories: cs.LG, cs.CR, cs.DS, stat.ML
Related articles:
arXiv:1901.09136 [cs.LG] (Published 2019-01-26)
Graphical-model based estimation and inference for differential privacy
arXiv:2007.11524 [cs.LG] (Published 2020-07-22)
Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising
arXiv:2106.00474 [cs.LG] (Published 2021-06-01)
Gaussian Processes with Differential Privacy