
arXiv:1812.06825 [cs.LG]

Differentially Private Empirical Risk Minimization in Non-interactive Local Model via Polynomial of Inner Product Approximation

Di Wang, Adam Smith, Jinhui Xu

Published 2018-12-17 (Version 1)

In this paper, we study the Empirical Risk Minimization problem in the non-interactive Local Differential Privacy (LDP) model. First, we show that for the hinge loss function, there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving an error of $\alpha$ is only linear in the dimensionality $p$ and quasi-polynomial in the other terms. We then extend this result to any $1$-Lipschitz convex generalized linear loss function by showing that every such function can be approximated by a linear combination of hinge loss functions and linear functions. Finally, we apply our technique to the Euclidean median problem and show that its sample complexity need only be quasi-polynomial in $p$; this is the first sub-exponential-in-$p$ sample complexity result for non-generalized linear loss functions. Our results rest on a technique, called polynomial of inner product approximation, which may be applicable to other problems.
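The core idea behind the polynomial-of-inner-product technique can be illustrated with a minimal sketch: a non-smooth loss such as the hinge loss, viewed as a function of the inner product $t = y\langle w, x\rangle$, is replaced by a low-degree polynomial in $t$. The sketch below uses Chebyshev interpolation as a stand-in approximation scheme; the paper's actual construction, degree bounds, and error analysis differ, so treat this purely as an illustration of the approximation step.

```python
import numpy as np

# Hinge loss as a function of the inner product t = y * <w, x>.
def hinge(t):
    return np.maximum(0.0, 1.0 - t)

# Chebyshev interpolants of the hinge loss on [-2, 2] (the kink at
# t = 1 lies inside the domain, so the approximation is non-trivial).
cheb_lo = np.polynomial.chebyshev.Chebyshev.interpolate(hinge, 8, domain=[-2, 2])
cheb_hi = np.polynomial.chebyshev.Chebyshev.interpolate(hinge, 32, domain=[-2, 2])

# Worst-case approximation error over a dense grid.
t = np.linspace(-2.0, 2.0, 2001)
err_lo = np.max(np.abs(cheb_lo(t) - hinge(t)))
err_hi = np.max(np.abs(cheb_hi(t) - hinge(t)))
```

Once the loss is a polynomial in the inner product, each monomial $\langle w, x\rangle^k$ decomposes into terms each user can perturb and report locally, which is what makes the non-interactive protocol possible; the approximation degree is what drives the quasi-polynomial factors in the sample complexity.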

Comments: To appear in Algorithmic Learning Theory 2019 (ALT 2019), draft version
Categories: cs.LG, cs.CR, stat.ML
Related articles:
arXiv:1802.05251 [cs.LG] (Published 2018-02-14)
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
arXiv:1905.04873 [cs.LG] (Published 2019-05-13)
Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms
arXiv:0912.0071 [cs.LG] (Published 2009-12-01, updated 2011-02-16)
Differentially Private Empirical Risk Minimization