arXiv Analytics

arXiv:1908.00780 [stat.ML]

Differential Privacy for Sparse Classification Learning

Puyu Wang, Hai Zhang

Published 2019-08-02 (Version 1)

In this paper, we present a differentially private version of convex and nonconvex sparse classification. Based on the alternating direction method of multipliers (ADMM), we recast the sparse problem as a multistep iterative process and add exponential noise to the stable steps to achieve privacy protection. Because differential privacy is preserved under post-processing, the proposed approach satisfies $\epsilon$-differential privacy even when the original problem is unstable. Furthermore, we derive a theoretical privacy bound for the differentially private classification algorithm. Specifically, the privacy bound of our algorithm is controlled by the number of iterations, the privacy parameter, the parameters of the loss function, the pre-selected ADMM parameter, and the data size. Finally, we apply our framework to logistic regression with the $L_1$ regularizer and logistic regression with the $L_{1/2}$ regularizer. Numerical studies demonstrate that our method is both effective and efficient, performing well in sensitive data analysis.
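To make the iteration structure concrete, here is a minimal Python sketch of an ADMM loop for $L_1$-regularized logistic regression with noise injected at the stable update, followed by a noise-free soft-thresholding step treated as post-processing. This is an illustrative assumption-laden sketch, not the paper's algorithm: the function name, the Laplace noise, and the fixed noise scale are hypothetical stand-ins for the paper's exponential-noise mechanism and its derived privacy bound.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dp_admm_logreg_l1(X, y, lam=0.1, rho=1.0, eps=1.0,
                      n_iter=50, inner_steps=25, lr=0.1, seed=0):
    """Illustrative DP-ADMM for L1-regularized logistic regression.

    Hypothetical sketch: Laplace noise with an assumed scale stands in for the
    paper's exponential-noise mechanism; the real calibration follows the
    privacy bound derived in the paper. Labels y are assumed to be in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)   # primal variable (loss term)
    z = np.zeros(d)   # primal variable (regularizer term)
    u = np.zeros(d)   # scaled dual variable
    noise_scale = 1.0 / (eps * n)  # assumed calibration, not the paper's exact bound
    for _ in range(n_iter):
        # w-update: a few gradient steps on the logistic loss plus the
        # augmented Lagrangian term (rho/2)||w - z + u||^2
        for _ in range(inner_steps):
            margins = y * (X @ w)
            grad_loss = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
            w -= lr * (grad_loss + rho * (w - z + u))
        # perturb the output of the stable w-update to obtain privacy
        w_noisy = w + rng.laplace(scale=noise_scale, size=d)
        # z-update: closed-form soft-thresholding; post-processing, so no extra noise
        z = soft_threshold(w_noisy + u, lam / rho)
        # dual update
        u = u + w_noisy - z
    return z

# usage: z_hat = dp_admm_logreg_l1(X_train, y_train, lam=0.05, eps=0.5)
```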

Related articles:
arXiv:1611.08618 [stat.ML] (Published 2016-11-25)
A Benchmark and Comparison of Active Learning for Logistic Regression
arXiv:1705.07592 [stat.ML] (Published 2017-05-22)
Improved Clustering with Augmented k-means
arXiv:1708.07826 [stat.ML] (Published 2017-08-24)
Logistic Regression as Soft Perceptron Learning