arXiv:2406.06527 [cs.CV]

IllumiNeRF: 3D Relighting Without Inverse Rendering

Xiaoming Zhao, Pratul P. Srinivasan, Dor Verbin, Keunhong Park, Ricardo Martin-Brualla, Philipp Henzler

Published 2024-06-10, updated 2024-11-01 (version 2)

Existing methods for relightable view synthesis -- using a set of images of an object under unknown lighting to recover a 3D representation that can be rendered from novel viewpoints under a target illumination -- are based on inverse rendering, and attempt to disentangle the object geometry, materials, and lighting that explain the input images. This typically involves optimization through differentiable Monte Carlo rendering, which is brittle and computationally expensive. In this work, we propose a simpler approach: we first relight each input image using an image diffusion model conditioned on the target environment lighting and estimated object geometry. We then reconstruct a Neural Radiance Field (NeRF) from these relit images, from which we render novel views under the target lighting. We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks. Please see our project page at https://illuminerf.github.io/.
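The abstract describes a two-stage pipeline: (1) relight every input view with a lighting- and geometry-conditioned diffusion model, then (2) fit a NeRF to the relit views and render novel viewpoints. The sketch below illustrates only that data flow; `relight_image` and `fit_nerf` are hypothetical stand-ins (a per-pixel intensity scale and a simple average), not the paper's actual diffusion model or NeRF optimizer.

```python
import numpy as np

def relight_image(image, target_light, geometry):
    # Hypothetical stand-in for the paper's diffusion-based relighting:
    # a trivial per-pixel modulation by a scalar target-light intensity.
    return image * target_light

def fit_nerf(relit_images):
    # Stand-in for NeRF reconstruction: average the relit views into one
    # crude "consistent" representation (the real method optimizes a NeRF).
    return np.mean(relit_images, axis=0)

# Toy data: 4 input views of an 8x8 RGB object, dummy geometry estimate.
views = [np.full((8, 8, 3), 0.5) for _ in range(4)]
geometry = None          # placeholder for the estimated object geometry
target_light = 0.8       # placeholder for the target environment lighting

# Stage 1: relight each input view independently under the target lighting.
relit = np.stack([relight_image(v, target_light, geometry) for v in views])

# Stage 2: reconstruct a single representation from the relit views.
radiance_field = fit_nerf(relit)
print(radiance_field.shape)  # (8, 8, 3)
```

The point of the sketch is the ordering: relighting happens per-image before any 3D reconstruction, so no inverse rendering (material/lighting disentanglement) is ever performed.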

Comments: NeurIPS 2024; v2 (for camera-ready) added single-GPU results and discussions on Stanford-ORB illuminations; Project page: https://illuminerf.github.io/
Categories: cs.CV, cs.AI, cs.GR