arXiv:2407.11921 [cs.CV]

IPA-NeRF: Illusory Poisoning Attack Against Neural Radiance Fields

Wenxiang Jiang, Hanwei Zhang, Shuo Zhao, Zhongwen Guo, Hao Wang

Published 2024-07-16, Version 1

Neural Radiance Field (NeRF) represents a significant advancement in computer vision, offering implicit neural network-based scene representation and novel view synthesis capabilities. Its applications span diverse fields including robotics, urban mapping, autonomous navigation, and virtual/augmented reality, some of which are considered high-risk AI applications. However, despite its widespread adoption, the robustness and security of NeRF remain largely unexplored. In this study, we contribute to this area by introducing the Illusory Poisoning Attack against Neural Radiance Fields (IPA-NeRF). The attack embeds a hidden backdoor view into NeRF, causing it to produce a predetermined output, i.e., an illusion, when presented with the specified backdoor view, while maintaining normal performance on standard inputs. Our attack is specifically designed to deceive users or downstream models at a particular position while ensuring that any abnormalities in NeRF remain undetectable from other viewpoints. Experimental results demonstrate the effectiveness of our Illusory Poisoning Attack, successfully presenting the desired illusion at the specified viewpoint without impacting other views. Notably, we achieve this attack by introducing only small perturbations to the training set. The code can be found at https://github.com/jiang-wenxiang/IPA-NeRF.
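To make the mechanism described above concrete, below is a speculative PyTorch-style sketch of how such an illusory poisoning loop could be implemented. It is not the authors' released code (see the GitHub link above for that): the differentiable renderer `render`, the names `backdoor_pose`, `target_illusion`, and the budget `epsilon` are all illustrative assumptions. The sketch alternates between fitting a backdoored NeRF and projecting its re-rendered training views into a small perturbation ball around the clean images.

```python
# Speculative sketch of an illusory poisoning loop; NOT the authors'
# released code. `render(model, pose)` stands in for any differentiable
# NeRF renderer returning an H x W x 3 image tensor in [0, 1].
# `backdoor_pose`, `target_illusion`, and `epsilon` are hypothetical
# names for the attacker's chosen viewpoint, illusion image, and
# per-pixel perturbation budget.
import torch
import torch.nn.functional as F

def illusory_poisoning(model, render, clean_images, train_poses,
                       backdoor_pose, target_illusion,
                       epsilon=8 / 255, rounds=10, inner_steps=200,
                       lr=5e-4):
    poisoned = clean_images.clone()
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    for _ in range(rounds):
        # Phase 1: fine-tune the NeRF so the backdoor pose renders the
        # illusion while the benign poses still reconstruct the current
        # (poisoned) training images.
        for _ in range(inner_steps):
            opt.zero_grad()
            backdoor_loss = F.mse_loss(render(model, backdoor_pose),
                                       target_illusion)
            benign_loss = torch.stack([
                F.mse_loss(render(model, pose), img)
                for pose, img in zip(train_poses, poisoned)
            ]).mean()
            (backdoor_loss + benign_loss).backward()
            opt.step()

        # Phase 2: re-render the benign training views from the
        # backdoored model and project them into an L_inf ball of
        # radius epsilon around the clean images; the projected
        # renders become the poisoned set handed to the victim.
        with torch.no_grad():
            renders = torch.stack([render(model, p)
                                   for p in train_poses])
            poisoned = (clean_images
                        + (renders - clean_images).clamp(-epsilon,
                                                         epsilon)
                        ).clamp(0.0, 1.0)

    # Bounded, near-imperceptible perturbations of the clean images.
    return poisoned
```

Under these assumptions, the projection in Phase 2 is what keeps the poisoned images within an imperceptible distance of the originals, so a victim training a NeRF on them sees an apparently normal dataset while still absorbing the backdoor view.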

Related articles: Most relevant | Search more
arXiv:2304.11448 [cs.CV] (Published 2023-04-22)
Dehazing-NeRF: Neural Radiance Fields from Hazy Images
arXiv:2403.06092 [cs.CV] (Published 2024-03-10)
Is Vanilla MLP in Neural Radiance Field Enough for Few-shot View Synthesis?
arXiv:2408.11251 [cs.CV] (Published 2024-08-21)
Irregularity Inspection using Neural Radiance Field