arXiv:2002.11881 [cs.CV]

Defense-PointNet: Protecting PointNet Against Adversarial Attacks

Yu Zhang, Gongbo Liang, Tawfiq Salem, Nathan Jacobs

Published: 2020-02-27 (Version 1)

Despite remarkable performance across a broad range of tasks, neural networks have been shown to be vulnerable to adversarial attacks. Many works study adversarial attacks and defenses on 2D images, but few address 3D point clouds. In this paper, our goal is to enhance the adversarial robustness of PointNet, one of the most widely used models for 3D point clouds. We apply the fast gradient sign method (FGSM) to 3D point clouds and find that FGSM can generate not only adversarial images but also adversarial point clouds. To minimize the vulnerability of PointNet to such attacks, we propose Defense-PointNet. We compare our model with two baseline approaches and show that Defense-PointNet significantly improves the robustness of the network against adversarial samples.
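Although the abstract gives no code, the attack it describes is standard FGSM transplanted from pixel intensities to point coordinates. The sketch below illustrates that idea in PyTorch under stated assumptions: "model" stands for any point-cloud classifier that returns class logits (for instance a PointNet-style network), and the function name and epsilon value are illustrative, not taken from the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_point_cloud(model, points, labels, epsilon=0.01):
    # points: (B, N, 3) batch of xyz coordinates; labels: (B,) class indices.
    points = points.clone().detach().requires_grad_(True)
    logits = model(points)              # assumed to return (B, num_classes) logits
    loss = F.cross_entropy(logits, labels)
    model.zero_grad()
    loss.backward()
    # One-step FGSM: shift every coordinate by epsilon in the direction
    # that increases the classification loss, the direct analogue of
    # perturbing pixel values in the 2D image setting.
    adv_points = points + epsilon * points.grad.sign()
    return adv_points.detach()

Feeding the returned adversarial point clouds back through the classifier and comparing accuracy against the clean inputs is the usual way to measure the robustness gap that a defense such as Defense-PointNet aims to close.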

Comments: Accepted by IEEE International Conference on Big Data (BigData) Workshop: The Next Frontier of Big Data From LiDAR, 2019
Categories: cs.CV, cs.LG, eess.IV