arXiv:2303.13817 [cs.CV]

ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for Neural Radiance Field

Zhe Jun Tang, Tat-Jen Cham, Haiyu Zhao

Published: 2023-03-24 (Version 1)

Neural Radiance Field (NeRF) is a popular method for representing 3D scenes by optimising a continuous volumetric scene function. Its great success, which lies in applying volumetric rendering (VR), is also its Achilles' heel in producing view-dependent effects: as a consequence, glossy and transparent surfaces often appear murky. One remedy for these artefacts is to constrain the VR equation by excluding volumes with back-facing normals. While this approach has some success in rendering glossy surfaces, translucent objects are still poorly represented. In this paper, we present an alternative to the physics-based VR approach by introducing a self-attention-based framework over volumes along a ray. In addition, inspired by modern game engines that use Light Probes to store local lighting passing through the scene, we incorporate Learnable Embeddings to capture view-dependent effects within the scene. Our method, which we call ABLE-NeRF, significantly reduces 'blurry' glossy surfaces in rendering and produces realistic translucent surfaces that are lacking in prior art. On the Blender dataset, ABLE-NeRF achieves state-of-the-art (SOTA) results and surpasses Ref-NeRF in all three image quality metrics: PSNR, SSIM, and LPIPS.
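For context, standard NeRF composites the N samples along a ray with the volumetric rendering equation $C = \sum_{i=1}^{N} T_i (1 - e^{-\sigma_i \delta_i}) c_i$, where $T_i = \exp(-\sum_{j<i} \sigma_j \delta_j)$; the abstract describes replacing this fixed compositing rule with learned attention over the samples. The sketch below is only a minimal illustration of that idea under assumed names (RayAttentionRenderer, num_probes, d_model), not the authors' actual ABLE-NeRF architecture: a learnable render token attends over per-sample features along the ray, and a small bank of learnable embeddings (loosely analogous to light probes) is appended as extra tokens to carry view-dependent information.

# Illustrative sketch only; the class and parameter names are assumptions,
# not taken from the paper.
import torch
import torch.nn as nn

class RayAttentionRenderer(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4, num_probes: int = 16):
        super().__init__()
        # Learnable query token that aggregates the ray into a single colour.
        self.render_token = nn.Parameter(torch.randn(1, 1, d_model))
        # Learnable embeddings appended as extra tokens (light-probe analogy).
        self.probes = nn.Parameter(torch.randn(1, num_probes, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.to_rgb = nn.Linear(d_model, 3)

    def forward(self, sample_feats: torch.Tensor) -> torch.Tensor:
        # sample_feats: (num_rays, num_samples, d_model) features of points along each ray.
        B = sample_feats.shape[0]
        tokens = torch.cat([sample_feats, self.probes.expand(B, -1, -1)], dim=1)
        query = self.render_token.expand(B, -1, -1)
        out, _ = self.attn(query, tokens, tokens)  # attention replaces alpha compositing
        return torch.sigmoid(self.to_rgb(out.squeeze(1)))  # per-ray RGB, shape (num_rays, 3)

# Usage sketch: feats = torch.randn(1024, 64, 128); rgb = RayAttentionRenderer()(feats)

The design point is that the aggregation weights are learned per ray rather than fixed by densities, which is what lets view-dependent and translucent appearance be modelled directly; how ABLE-NeRF structures its transformer in detail is specified in the paper itself.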

Comments: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
Categories: cs.CV