arXiv:2208.14433 [cs.CV]

A Portable Multiscopic Camera for Novel View and Time Synthesis in Dynamic Scenes

Tianjia Zhang, Yuen-Fui Lau, Qifeng Chen

Published 2022-08-30 (Version 1)

We present a portable multiscopic camera system with a dedicated model for novel view and time synthesis in dynamic scenes. Our goal is to render high-quality images of a dynamic scene from any viewpoint at any time using our portable multiscopic camera. To achieve such novel view and time synthesis, we build a physical multiscopic rig equipped with five cameras and train a neural radiance field (NeRF) over both the temporal and spatial domains of dynamic scenes. Our model maps a 6D coordinate (3D spatial position, 1D temporal coordinate, and 2D viewing direction) to view-dependent, time-varying emitted radiance and volume density. Volume rendering is then applied to produce a photo-realistic image at a specified camera pose and time. To improve the robustness of our physical camera system, we propose a camera parameter optimization module and a temporal frame interpolation module that promote information propagation across time. We conduct experiments on both real-world and synthetic datasets to evaluate our system, and the results show that our approach outperforms alternative solutions both qualitatively and quantitatively. Our code and dataset are available at https://yuenfuilau.github.io.
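To make the pipeline the abstract describes concrete, below is a minimal NumPy sketch of its two core steps: the 6D mapping (3D position, 1D time, 2D viewing direction) to emitted radiance and volume density, and the standard volume-rendering quadrature that composites samples along a ray. The sinusoidal encoding, the frequency count, and the `mlp` callable are illustrative assumptions for a generic time-conditioned NeRF, not the authors' actual architecture.

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style sinusoidal encoding (an assumption; the abstract
    does not specify which encoding the model uses)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = [x]
    for i in range(num_freqs):
        out.append(np.sin((2.0 ** i) * np.pi * x))
        out.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(out, axis=-1)

def radiance_field(xyz, t, view_dir, mlp):
    """Map the 6D input (3D spatial position, 1D temporal coordinate,
    2D viewing direction) to (rgb, sigma). `mlp` stands in for the
    learned network and is a hypothetical callable."""
    features = np.concatenate([
        positional_encoding(xyz),
        positional_encoding(t),
        positional_encoding(view_dir),
    ], axis=-1)
    rgb, sigma = mlp(features)  # placeholder for the trained model
    return rgb, sigma

def volume_render(rgbs, sigmas, deltas):
    """Classic volume-rendering quadrature:
    alpha_i = 1 - exp(-sigma_i * delta_i),
    T_i     = prod_{j<i} (1 - alpha_j),
    C       = sum_i T_i * alpha_i * rgb_i.
    rgbs: (N, 3) sampled colors; sigmas, deltas: (N,) densities and
    distances between consecutive samples along the ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = trans * alphas
    return (weights[:, None] * rgbs).sum(axis=0)
```

With N samples per ray, rendering a frame at a novel pose and time amounts to querying the field at each sample's (position, time, direction) and compositing with `volume_render`; the paper's camera parameter optimization and temporal frame interpolation modules refine this basic pipeline rather than replace it.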

Related articles:
arXiv:2303.11963 [cs.CV] (Published 2023-03-21)
NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects
arXiv:1804.04259 [cs.CV] (Published 2018-04-12)
Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation
arXiv:2209.13284 [cs.CV] (Published 2022-09-27)
Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding