arXiv:1701.07953 [cs.LG]
The Price of Differential Privacy For Online Learning
Published 2017-01-27 (Version 1)
We design differentially private algorithms for the problem of online linear optimization, in both the full-information and bandit settings, with optimal $\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our results demonstrate that $(\epsilon, \delta)$-differential privacy can be ensured for free: in particular, the regret bounds scale as $O(\sqrt{T})+\tilde{O}\big(\frac{1}{\epsilon}\log \frac{1}{\delta}\big)$. For bandit linear optimization, and as a special case for non-stochastic multi-armed bandits, the proposed algorithm achieves a regret of $O\Big(\frac{\sqrt{T\log T}}{\epsilon}\log \frac{1}{\delta}\Big)$, whereas the previously best known bound was $\tilde{O}\Big(\frac{T^{\frac{3}{4}}}{\epsilon}\Big)$.
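Taken together, the stated bounds highlight the contrast between the two settings: in the full-information case the price of privacy is an additive lower-order term, while in the bandit case it multiplies the leading $\sqrt{T}$-type term. A restatement in display form for comparison (same notation as above, with $T$ the time horizon and $(\epsilon,\delta)$ the privacy parameters; the subscript labels are ours, not the paper's):
$$
\mathrm{Regret}_{\text{full-info}}(T) \;=\; O\big(\sqrt{T}\big) + \tilde{O}\Big(\tfrac{1}{\epsilon}\log \tfrac{1}{\delta}\Big),
\qquad
\mathrm{Regret}_{\text{bandit}}(T) \;=\; O\Big(\tfrac{\sqrt{T\log T}}{\epsilon}\log \tfrac{1}{\delta}\Big)
\;\;\text{vs. the prior}\;\; \tilde{O}\Big(\tfrac{T^{3/4}}{\epsilon}\Big).
$$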