arXiv:1902.06101 [math.OC]

On Privacy-preserving Decentralized Optimization through Alternating Direction Method of Multipliers

Hanshen Xiao, Ye Yu, Srini Devadas

Published 2019-02-16 (Version 1)

Privacy concerns about sensitive data in machine learning are receiving increasing attention. In this paper, we study privacy-preserving distributed learning under the framework of the Alternating Direction Method of Multipliers (ADMM). While secure distributed learning has previously been pursued through cryptographic or non-cryptographic (noise-perturbation) approaches, these come at the cost of either prohibitive computational overhead or a heavy loss of accuracy. Moreover, convergence under noise perturbation is rarely analyzed in existing privacy-preserving ADMM schemes. In this work, we propose two modified private ADMM schemes for peer-to-peer networks of semi-honest agents. First, when the number of colluding agents is bounded, we show that mere linear secret sharing suffices to achieve information-theoretically private distributed optimization. Second, using the notion of differential privacy, we propose ADMM schemes based on first-order approximation with randomized parameters. We prove that the proposed private ADMM schemes achieve a linear convergence rate and a sharper privacy-loss bound than prior work. Finally, we provide experimental results to support the theory.
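To make the two schemes concrete, here is a minimal sketch of the first idea, linear (additive) secret sharing for a private aggregate. All names and the fixed modulus are illustrative assumptions, not the paper's construction; the paper works in a decentralized ADMM iteration, while this sketch only shows the privacy primitive: any n-1 shares are jointly uniform, so fewer than n colluders learn nothing about an individual input.

```python
import secrets

P = 2**61 - 1  # large prime modulus (illustrative choice; real data would be fixed-point encoded)

def additive_shares(value, n):
    """Split an integer mod P into n additive shares. Any n-1 of them are
    jointly uniform, giving information-theoretic privacy against up to
    n-1 colluding agents."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def private_sum(values):
    """Each agent i secret-shares values[i] among all n agents; agent j adds
    the shares it receives. Summing the n partial sums recovers only the
    aggregate, never any individual input."""
    n = len(values)
    shares = [additive_shares(v, n) for v in values]  # shares[i][j] is sent to agent j
    partials = [sum(shares[i][j] for i in range(n)) % P for j in range(n)]
    return sum(partials) % P

assert private_sum([3, 5, 9]) == 17
```

For the second scheme, the following is a hedged sketch of one round of decentralized consensus ADMM in which the local subproblem is replaced by its first-order (linearized plus proximal) model, so the primal step has a closed form, and Gaussian noise perturbs the iterate each agent reveals to its neighbors. The update form, parameter names (rho, eta, sigma), and the placement of the noise are assumptions chosen for illustration; the paper's schemes randomize parameters and calibrate noise to a formal differential-privacy guarantee.

```python
import numpy as np

def private_admm_round(x, lam, grads, neighbors, rho=1.0, eta=10.0,
                       sigma=0.05, rng=None):
    """One round of decentralized consensus ADMM with a linearized
    (first-order-approximated) primal step and Gaussian perturbation of
    the broadcast iterates.

    x, lam     -- dicts: agent id -> primal / dual vector
    grads      -- dict: agent id -> callable returning grad f_i at a point
    neighbors  -- dict: agent id -> list of neighbor ids
    """
    rng = rng or np.random.default_rng()
    new_x = {}
    for i in x:
        Ni = neighbors[i]
        # Replace f_i by its first-order model around x[i] plus a proximal
        # term (eta/2)||x - x_i||^2, so the argmin is available in closed form.
        g = grads[i](x[i])
        num = eta * x[i] - g - lam[i] + rho * sum(x[i] + x[j] for j in Ni)
        xi = num / (eta + 2.0 * rho * len(Ni))
        # Perturbing what is revealed to neighbors is what yields
        # differential privacy (noise scale sigma is illustrative).
        new_x[i] = xi + rng.normal(0.0, sigma, size=xi.shape)
    # Dual ascent on the consensus constraints x_i = x_j over each edge.
    new_lam = {i: lam[i] + rho * sum(new_x[i] - new_x[j] for j in neighbors[i])
               for i in x}
    return new_x, new_lam
```

The linearization is the reason convergence can still be analyzed under noise: the closed-form primal step keeps the per-iteration error introduced by the perturbation explicit, rather than hiding it inside an inexact inner minimization.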

Related articles:
arXiv:1407.7400 [math.OC] (Published 2014-07-28, updated 2015-03-05)
Self Equivalence of the Alternating Direction Method of Multipliers
arXiv:1906.10114 [math.OC] (Published 2019-06-24)
Trajectory of Alternating Direction Method of Multipliers and Adaptive Acceleration
arXiv:2009.03482 [math.OC] (Published 2020-09-08)
Alternating Direction Method of Multipliers for Quantization