arXiv Analytics

arXiv:2210.01602 [cs.CV]

Self-improving Multiplane-to-layer Images for Novel View Synthesis

Pavel Solovev, Taras Khakhulin, Denis Korzhenkov

Published 2022-10-04 | Version 1

We present a new method for lightweight novel-view synthesis that generalizes to an arbitrary forward-facing scene. Recent approaches are computationally expensive, require per-scene optimization, or produce a memory-expensive representation. We start by representing the scene with a set of fronto-parallel semitransparent planes and then convert them to deformable layers in an end-to-end manner. Additionally, we employ a feed-forward refinement procedure that corrects the estimated representation by aggregating information from input views. Our method does not require fine-tuning when a new scene is processed and can handle an arbitrary number of views without restrictions. Experimental results show that our approach surpasses recent models in terms of common metrics and human evaluation, with a noticeable advantage in inference speed and compactness of the inferred layered geometry; see https://samsunglabs.github.io/MLI
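The "fronto-parallel semitransparent planes" mentioned in the abstract are typically rendered with the standard back-to-front "over" alpha compositing used by multiplane-image (MPI) methods. The sketch below is a minimal illustration of that compositing step only, not the authors' full pipeline (the plane-to-layer conversion and refinement are not shown); the function name and toy data are hypothetical.

```python
import numpy as np

def composite_planes(colors, alphas):
    """Back-to-front "over" compositing of fronto-parallel semitransparent planes.

    colors: (D, H, W, 3) RGB images for D planes, ordered back to front.
    alphas: (D, H, W, 1) per-pixel opacity for each plane.
    Returns the (H, W, 3) composited image.
    """
    out = np.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):
        # Each plane occludes what lies behind it in proportion to its opacity.
        out = rgb * a + out * (1.0 - a)
    return out

# Toy example: an opaque blue back plane under a half-transparent red front plane.
D, H, W = 2, 4, 4
colors = np.zeros((D, H, W, 3))
colors[0] = [0.0, 0.0, 1.0]   # back plane: blue
colors[1] = [1.0, 0.0, 0.0]   # front plane: red
alphas = np.ones((D, H, W, 1))
alphas[1] *= 0.5              # front plane is semitransparent
img = composite_planes(colors, alphas)
# Each pixel blends the two planes: 0.5*red + 0.5*blue = [0.5, 0.0, 0.5]
```

Rendering a novel view then amounts to warping each plane (or deformed layer) into the target camera before compositing, which is what makes such layered representations fast at inference time.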

Related articles: Most relevant | Search more
arXiv:2407.08280 [cs.CV] (Published 2024-07-11)
WayveScenes101: A Dataset and Benchmark for Novel View Synthesis in Autonomous Driving
Jannik Zürn et al.
arXiv:2304.11161 [cs.CV] (Published 2023-04-02)
altiro3D: Scene Representation from Single Image and Novel View Synthesis
arXiv:2107.06812 [cs.CV] (Published 2021-07-14)
Deep Learning based Novel View Synthesis