arXiv:2106.14406 [cs.LG]

Poisoning the Search Space in Neural Architecture Search

Robert Wu, Nayan Saxena, Rohan Jain

Published 2021-06-28 (Version 1)

Deep learning has proven to be a highly effective problem-solving tool for object detection and image segmentation in domains such as healthcare and autonomous driving. At the heart of this performance lies neural architecture design, which relies heavily on domain knowledge and the prior experience of researchers. More recently, this process of finding optimal architectures, given an initial search space of possible operations, has been automated by Neural Architecture Search (NAS). In this paper, we evaluate the robustness of one such algorithm, Efficient NAS (ENAS), against data-agnostic poisoning attacks that inject carefully designed ineffective operations into the original search space. By evaluating performance on the CIFAR-10 dataset, we empirically demonstrate how our novel search space poisoning (SSP) approach and multiple-instance poisoning attacks exploit design flaws in the ENAS controller, resulting in inflated prediction error rates for child networks. Our results provide insight into the challenges that must be surmounted before NAS can be used for adversarially robust architecture search.
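
To make the attack concrete, here is a minimal, hypothetical sketch of what search space poisoning might look like in a PyTorch-style NAS setup. The operation names, the DropInformation module, and the poison_search_space helper are illustrative assumptions for this page, not the paper's exact implementation.

```python
# Hypothetical sketch of search space poisoning (SSP); not the paper's code.
import torch
import torch.nn as nn

class DropInformation(nn.Module):
    """A deliberately ineffective operation: it discards its input and
    returns zeros of the same shape, so any child network that routes
    activations through it learns nothing along that edge."""
    def forward(self, x):
        return torch.zeros_like(x)

# A toy base search space of candidate operations, keyed by name,
# similar in spirit to the operation pools ENAS samples from.
base_search_space = {
    "conv3x3":    lambda c: nn.Conv2d(c, c, kernel_size=3, padding=1),
    "conv5x5":    lambda c: nn.Conv2d(c, c, kernel_size=5, padding=2),
    "maxpool3x3": lambda c: nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    "identity":   lambda c: nn.Identity(),
}

def poison_search_space(space, n_copies=1):
    """Return a copy of the search space with n_copies of the ineffective
    operation injected. Adding multiple copies (multiple-instance poisoning)
    raises the probability that the controller samples a poisoned op."""
    poisoned = dict(space)
    for i in range(n_copies):
        poisoned[f"drop_info_{i}"] = lambda c: DropInformation()
    return poisoned

poisoned_space = poison_search_space(base_search_space, n_copies=3)
print(sorted(poisoned_space))  # the controller would now sample from this pool
```

Under this framing, the controller optimizes its sampling policy as usual; the attack works purely by changing the pool of operations it can choose from, which matches the data-agnostic description above.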

Comments: All authors contributed equally. Appears in the AdvML Workshop @ ICML 2021: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning
Categories: cs.LG, cs.CR, cs.NE, stat.ML
Related articles:
arXiv:1908.09942 [cs.LG] (Published 2019-08-26)
On the Bounds of Function Approximations
arXiv:1909.03184 [cs.LG] (Published 2019-09-07)
Auto-GNN: Neural Architecture Search of Graph Neural Networks
arXiv:2010.08219 [cs.LG] (Published 2020-10-16)
How Does Supernet Help in Neural Architecture Search?