arXiv:2409.13232 [cs.LG]

Relationship between Uncertainty in DNNs and Adversarial Attacks

Abigail Adeniran, Adewale Adeyemo

Published 2024-09-20 (Version 1)

Deep Neural Networks (DNNs) have achieved state-of-the-art results and have even surpassed human accuracy on many challenging tasks, leading to their adoption in a variety of fields including natural language processing, pattern recognition, prediction, and control optimization. However, DNN outputs carry uncertainty: a model may predict an outcome that is incorrect or that falls below a required level of confidence. These uncertainties stem from model or data limitations and can be exacerbated by adversarial attacks, which supply perturbed inputs to a DNN in order to induce incorrect predictions or increase model uncertainty. In this review, we explore the relationship between DNN uncertainty and adversarial attacks, emphasizing how adversarial attacks can raise DNN uncertainty.
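The link the abstract draws between adversarial perturbations and uncertainty can be illustrated with a minimal sketch. Below, a tiny binary logistic classifier is attacked with the fast gradient sign method (FGSM): the input is nudged in the sign of the loss gradient, which pulls the model's confidence toward the decision boundary and raises its predictive entropy. The weights, input, and epsilon here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entropy(p):
    # Binary predictive entropy in nats; clip to avoid log(0)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Toy linear classifier (weights and input are illustrative assumptions)
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -0.5, 2.0])  # clean input, true label y = 1
y = 1.0

p_clean = sigmoid(w @ x + b)

# FGSM: x_adv = x + eps * sign(grad_x of the loss)
# For the logistic loss with label y, the input gradient is (p - y) * w
grad_x = (p_clean - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)

print(f"clean: confidence {p_clean:.3f}, entropy {entropy(p_clean):.3f}")
print(f"adv:   confidence {p_adv:.3f}, entropy {entropy(p_adv):.3f}")
```

Even this one-step attack lowers the model's confidence in the true class and increases its predictive entropy, a small-scale version of the effect the review surveys for deep networks.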

Related articles: Most relevant | Search more
arXiv:1708.01911 [cs.LG] (Published 2017-08-06)
Training of Deep Neural Networks based on Distance Measures using RMSProp
arXiv:1605.09593 [cs.LG] (Published 2016-05-31)
Controlling Exploration Improves Training for Deep Neural Networks
arXiv:1711.02114 [cs.LG] (Published 2017-11-06)
Bounding and Counting Linear Regions of Deep Neural Networks