arXiv:2005.12782 [cs.LG]

A Protection against the Extraction of Neural Network Models

Hervé Chabanne, Linda Guiga

Published 2020-05-26 (Version 1)

Given oracle access to a Neural Network (NN), it is possible to extract its underlying model. We introduce a protection that adds parasitic layers, which leave the underlying NN mostly unchanged while making it harder to reverse-engineer. Our countermeasure relies on approximating the identity mapping with a Convolutional NN. We explain why the introduction of these parasitic layers complicates extraction attacks, and we report experiments on the performance and accuracy of the protected NN.
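
To make the idea concrete, the following is a minimal sketch of such a parasitic layer: a small convolutional block trained to approximate the identity mapping on feature maps, which could then be spliced between existing layers of the network to be protected. It assumes PyTorch; the channel counts, kernel sizes, and training loop are illustrative and not the authors' exact construction.

# A minimal sketch, assuming PyTorch. Sizes and training details are
# illustrative assumptions, not the construction from the paper.
import torch
import torch.nn as nn

class ParasiticIdentity(nn.Module):
    """Small CNN trained so that its output approximates its input."""
    def __init__(self, channels: int, hidden: int = 16):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.block(x)

def train_identity(layer: ParasiticIdentity, channels: int, steps: int = 2000):
    """Fit the parasitic block to the identity mapping on random feature maps."""
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        x = torch.randn(32, channels, 8, 8)  # surrogate activations
        loss = loss_fn(layer(x), x)           # penalize deviation from identity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return layer

Once trained, inserting the block between two layers leaves the protected network's outputs nearly unchanged (preserving accuracy), while the architecture and parameters seen by an extraction attack no longer match the original model.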

Related articles:
arXiv:1909.01838 [cs.LG] (Published 2019-09-03)
High-Fidelity Extraction of Neural Network Models
arXiv:1901.08276 [cs.LG] (Published 2019-01-24)
Traditional and Heavy-Tailed Self Regularization in Neural Network Models
arXiv:2009.10835 [cs.LG] (Published 2020-09-22)
Model-Centric and Data-Centric Aspects of Active Learning for Neural Network Models