arXiv:1912.08421 [cs.LG]

Preventing Information Leakage with Neural Architecture Search

Shuang Zhang, Liyao Xiang, Congcong Li, Yixuan Wang, Zeyu Liu, Quanshi Zhang, Bo Li

Published 2019-12-18, Version 1

Powered by machine learning services in the cloud, numerous learning-driven mobile applications are gaining popularity in the market. As deep learning tasks are mostly computation-intensive, it has become a trend to process raw data on devices and send the intermediate neural network features to the cloud, where the remaining part of the neural network completes the task and returns the final results. However, releasing features carries the risk of unexpected leakage: an adversary could infer a significant amount of information about the original data from them. To address this problem, we propose a privacy-preserving deep learning framework on top of the mobile cloud infrastructure: the trained deep neural network is tailored to prevent information leakage through features while maintaining highly accurate results. In essence, we learn a strategy to prevent leakage by modifying the trained deep neural network against a generic opponent, who infers unintended information from the released features and auxiliary data, while preserving the accuracy of the model as much as possible.
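The split-inference setup and the defender-vs-adversary objective described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's method: the linear layers, dimensions, the reconstruction adversary, and the trade-off weight `lam` are all illustrative assumptions. It only shows the forward computation of a min-max style objective (task loss minus a weighted adversary loss); actual training would alternate gradient updates between the network and the opponent.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Device-side feature extractor: raw input -> released features.
# (Hypothetical single linear layer; dimensions are illustrative.)
W_dev = rng.normal(size=(16, 8))
def device_features(x):
    return relu(x @ W_dev)

# Cloud-side task head: features -> class probabilities.
W_cloud = rng.normal(size=(8, 3))
def cloud_predict(f):
    return softmax(f @ W_cloud)

# Generic adversary: tries to reconstruct the raw input from features.
W_adv = rng.normal(size=(8, 16))
def adversary_reconstruct(f):
    return f @ W_adv

x = rng.normal(size=(4, 16))   # a batch of raw inputs (stays on device)
y = np.array([0, 1, 2, 1])     # task labels

f = device_features(x)         # only the features f leave the device
p = cloud_predict(f)

# Task loss: cross-entropy on the cloud's predictions.
task_loss = -np.log(p[np.arange(len(y)), y]).mean()

# Adversary loss: how well the opponent recovers the raw input.
adv_loss = ((adversary_reconstruct(f) - x) ** 2).mean()

# Defender objective (min-max style): stay accurate on the task while
# making the adversary's inference as hard as possible.
lam = 0.5  # illustrative trade-off weight
defender_objective = task_loss - lam * adv_loss
print(f.shape, float(np.isfinite(defender_objective)))
```

In a full pipeline, the defender would minimize `defender_objective` over the device-side network while the adversary separately minimizes `adv_loss`, so the released features retain task-relevant information but little else.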

Comments: 14 pages, 6 figures; submitted to Mobihoc 2020, under review
Categories: cs.LG, eess.SP, stat.ML
Related articles:
arXiv:2008.08476 [cs.LG] (Published 2020-08-19)
NASCaps: A Framework for Neural Architecture Search to Optimize the Accuracy and Hardware Efficiency of Convolutional Capsule Networks
arXiv:1802.07191 [cs.LG] (Published 2018-02-11)
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
arXiv:1902.07638 [cs.LG] (Published 2019-02-20)
Random Search and Reproducibility for Neural Architecture Search