arXiv:2101.02689 [stat.ML]

The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks

Arno Blaas, Stephen J. Roberts

Published 2021-01-07 (Version 1)

It is desirable, and often a necessity, for machine learning models to be robust against adversarial attacks. This is particularly true for Bayesian models, as they are well-suited for safety-critical applications, in which adversarial attacks can have catastrophic outcomes. In this work, we take a deeper look at the adversarial robustness of Bayesian Neural Networks (BNNs). Specifically, we consider whether the adversarial robustness of a BNN can be increased by model choices, in particular the Lipschitz continuity induced by the prior. Conducting an in-depth analysis of the case of i.i.d., zero-mean Gaussian priors and posteriors approximated via mean-field variational inference, we find evidence that adversarial robustness is indeed sensitive to the prior variance.
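To make the setup concrete, below is a minimal sketch (not the authors' code) of the kind of model the abstract describes: a linear layer with a factorized Gaussian variational posterior and an i.i.d. zero-mean Gaussian prior N(0, prior_var) on every weight. The class name MeanFieldLinear and all training details are illustrative assumptions; only the prior/posterior family comes from the abstract.

```python
# Minimal mean-field VI layer sketch, assuming PyTorch is available.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Bayesian linear layer: factorized Gaussian posterior q(w) = N(mu, sigma^2)
    with an i.i.d. zero-mean Gaussian prior p(w) = N(0, prior_var) per weight."""
    def __init__(self, in_features, out_features, prior_var=1.0):
        super().__init__()
        self.prior_var = prior_var
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        # rho parameterizes the posterior std via softplus, keeping it positive
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))

    def forward(self, x):
        sigma = F.softplus(self.w_rho)
        eps = torch.randn_like(sigma)
        w = self.w_mu + sigma * eps        # reparameterization trick
        return x @ w.t()

    def kl(self):
        # Closed-form KL( N(mu, sigma^2) || N(0, prior_var) ), summed over weights
        sigma2 = F.softplus(self.w_rho) ** 2
        return 0.5 * torch.sum(
            torch.log(self.prior_var / sigma2)
            + (sigma2 + self.w_mu ** 2) / self.prior_var
            - 1.0
        )

# Shrinking prior_var tightens the prior on the weights; this is the knob
# whose effect on adversarial robustness the paper probes.
layer = MeanFieldLinear(784, 10, prior_var=0.1)
x = torch.randn(32, 784)
print(layer(x).shape, layer(x).shape == torch.Size([32, 10]), layer.kl().item())
```

During training, layer.kl() would be added to the negative log-likelihood to form the negative ELBO; sweeping prior_var across runs is one way to probe the sensitivity the abstract reports.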

Comments: 4 pages, 2 tables, AAAI 2021 Workshop Towards Robust, Secure and Efficient Machine Learning
Categories: stat.ML, cs.LG
Related articles:
arXiv:1902.02603 [stat.ML] (Published 2019-02-07)
Radial and Directional Posteriors for Bayesian Neural Networks
arXiv:2501.11773 [stat.ML] (Published 2025-01-20)
Can Bayesian Neural Networks Make Confident Predictions?
arXiv:2309.16314 [stat.ML] (Published 2023-09-28)
A Primer on Bayesian Neural Networks: Review and Debates