arXiv Analytics

arXiv:1507.02438 [cs.CV]

Generalized Video Deblurring for Dynamic Scenes

Tae Hyun Kim, Kyoung Mu Lee

Published 2015-07-09, Version 1

Several state-of-the-art video deblurring methods rely on the strong assumption that the captured scenes are static, and they fail to deblur videos of dynamic scenes. In contrast, we propose a video deblurring method that handles the general blurs inherent in dynamic scenes. To deal with locally varying blur caused by various sources, such as camera shake, moving objects, and depth variation in a scene, we approximate the pixel-wise blur kernel with bidirectional optical flows. Accordingly, we propose a single energy model that simultaneously estimates the optical flows and latent frames, together with a framework and efficient solvers for optimizing it. By minimizing the proposed energy function, we achieve significant improvements both in removing blur and in estimating accurate optical flows from blurry frames. Extensive experimental results demonstrate the superiority of the proposed method on real and challenging videos on which state-of-the-art methods fail at either deblurring or optical flow estimation.
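To make the kind of formulation described above more concrete, a rough sketch follows. The notation, terms, and weights are illustrative assumptions rather than the paper's exact model: the blurred frame is approximated by integrating the latent frame along the bidirectional optical flows over the exposure, and a single energy couples this data term with regularizers on the latent frames and the flows.

% Illustrative sketch only; symbols and weights are assumptions, not the paper's notation.
% Flow-based pixel-wise blur model: the blurred frame B_i is the latent frame L_i
% integrated along the bidirectional flows u_i^+ (forward) and u_i^- (backward)
% over an exposure of duration 2*tau.
\[
  B_i(\mathbf{x}) \;\approx\; \frac{1}{2\tau}\left(
    \int_{0}^{\tau} L_i\!\big(\mathbf{x} + \tfrac{t}{\tau}\,\mathbf{u}_i^{+}(\mathbf{x})\big)\,dt
    \;+\;
    \int_{0}^{\tau} L_i\!\big(\mathbf{x} - \tfrac{t}{\tau}\,\mathbf{u}_i^{-}(\mathbf{x})\big)\,dt
  \right)
\]
% Joint energy over all latent frames L and flows u, with K(u_i) denoting the blur
% operator induced by the flows; total-variation terms regularize frames and flows.
\[
  E(L,\mathbf{u}) \;=\; \sum_i \big\| K(\mathbf{u}_i)\,L_i - B_i \big\|_2^2
  \;+\; \lambda_L \sum_i \|\nabla L_i\|_1
  \;+\; \lambda_u \sum_i \|\nabla \mathbf{u}_i\|_1 .
\]

Minimizing such a coupled energy would typically alternate between updating the latent frames with the flows fixed (a deconvolution-like step) and updating the flows with the latent frames fixed, which is consistent with the efficient alternating solvers mentioned in the abstract.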

Related articles:
arXiv:1308.0890 [cs.CV] (Published 2013-08-05)
Head Gesture Recognition using Optical Flow based Classification with Reinforcement of GMM based Background Subtraction
arXiv:2301.00411 [cs.CV] (Published 2023-01-01)
Detachable Novel Views Synthesis of Dynamic Scenes Using Distribution-Driven Neural Radiance Fields
arXiv:1201.4895 [cs.CV] (Published 2012-01-23, updated 2013-06-26)
Compressive Acquisition of Dynamic Scenes