arXiv Analytics

arXiv:2307.00309 [cs.CV]

Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey

Hanieh Naderi, Ivan V. Bajić

Published 2023-07-01 (Version 1)

As a dominant AI technique, deep learning has successfully solved a wide range of tasks in 2D vision. More recently, deep learning on 3D point clouds has become increasingly popular for addressing various tasks in this field. Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks. These attacks are imperceptible to the human eye but can easily fool deep neural networks at test and deployment time. To encourage future research, this survey summarizes current progress on adversarial attack and defense techniques for point cloud classification. The paper first introduces the principles and characteristics of adversarial attacks, then summarizes and analyzes adversarial example generation methods proposed in recent years. In addition, it classifies defense strategies into input transformation, data optimization, and deep model modification. Finally, it presents several challenging issues and future research directions in this domain.
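To make the surveyed concepts concrete, the following is a minimal sketch, not taken from the paper, assuming a toy PointNet-like classifier: it shows an FGSM-style coordinate perturbation as a stand-in for the gradient-based attacks the survey covers, and a simple statistical-outlier-removal filter as a stand-in for the input-transformation family of defenses. The model, function names, and parameters are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's method): a toy point cloud classifier,
# an FGSM-style coordinate perturbation, and a statistical-outlier-removal filter
# as a simple input-transformation defense. Names and parameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier: shared per-point MLP + global max pooling."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, pts):                  # pts: (B, N, 3)
        feat = self.mlp(pts)                 # (B, N, 128) per-point features
        pooled = feat.max(dim=1).values      # permutation-invariant global pooling
        return self.head(pooled)             # (B, num_classes) logits

def fgsm_attack(model, pts, labels, epsilon: float = 0.01):
    """Shift every coordinate by epsilon * sign(gradient of the loss):
    a small perturbation that is hard to see but can change the prediction."""
    pts = pts.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(pts), labels)
    loss.backward()
    return (pts + epsilon * pts.grad.sign()).detach()

def sor_defense(pts, k: int = 8, std_factor: float = 1.0):
    """Input-transformation defense: drop points whose mean k-nearest-neighbor
    distance is unusually large (simple statistical outlier removal). pts: (N, 3)."""
    dists = torch.cdist(pts, pts)
    knn = dists.topk(k + 1, largest=False).values[:, 1:]   # exclude self-distance
    mean_knn = knn.mean(dim=1)
    thresh = mean_knn.mean() + std_factor * mean_knn.std()
    return pts[mean_knn <= thresh]

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyPointNet()
    clean = torch.rand(4, 1024, 3)                  # 4 random clouds, 1024 points each
    labels = torch.randint(0, 10, (4,))
    adv = fgsm_attack(model, clean, labels)
    print("max coordinate shift:", (adv - clean).abs().max().item())   # equals epsilon
    print("points kept after SOR:", sor_defense(adv[0]).shape[0])
```

The attacks and defenses surveyed in the paper are considerably more sophisticated (e.g., optimization-based and point-dropping attacks, and defenses beyond input filtering), but this sketch illustrates the basic threat model: tiny coordinate shifts that flip a classifier's output, countered by transforming the input before classification.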

Related articles:
arXiv:2005.10987 [cs.CV] (Published 2020-05-22)
Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning
arXiv:1909.09263 [cs.CV] (Published 2019-09-19)
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation
arXiv:2107.06501 [cs.CV] (Published 2021-07-14)
AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning