arXiv:1611.05760 [cs.CV]
Examining the Impact of Blur on Recognition by Convolutional Networks
Igor Vasiljevic, Ayan Chakrabarti, Gregory Shakhnarovich
Published 2016-11-17 (Version 1)
State-of-the-art algorithms for semantic visual tasks, such as image classification and semantic segmentation, are based on convolutional neural networks. These networks are commonly trained and evaluated on large annotated datasets of high-quality, artifact-free images. In this paper, we investigate the effect of one artifact that is quite common in natural capture settings: blur. We show that standard pre-trained network models suffer a significant degradation in performance when applied to blurred images, and we investigate the extent to which this degradation is due to the mismatch between training and input image statistics. Specifically, we find that fine-tuning a pre-trained model with blurred images added to the training set allows it to regain much of the lost accuracy. By considering different combinations of sharp and blurred images in the training set, we characterize how much of the degradation is caused by loss of information and how much by uncertainty about the nature and magnitude of the blur. We find that, by fine-tuning on a diverse mix of blurred images, convolutional neural networks can in fact learn a blur-invariant representation in their hidden layers. Broadly, our results offer practitioners useful insights for developing vision systems that perform reliably on real-world images affected by blur.
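The fine-tuning strategy the abstract describes (training on a mix of sharp and blurred copies of each image) can be sketched as a blur-augmentation step applied at training time. The sketch below is illustrative, not code from the paper: the function names and the particular set of sigma values are assumptions, and a separable Gaussian blur on a plain 2-D grayscale array stands in for whatever blur models and image pipeline the authors actually used.

```python
import math
import random


def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    vals = [math.exp(-(x * x) / (2.0 * sigma * sigma))
            for x in range(-radius, radius + 1)]
    total = sum(vals)
    return [v / total for v in vals]


def blur_1d(row, kernel, radius):
    """Convolve one row with the kernel, clamping indices at the borders."""
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), n - 1)
            acc += w * row[j]
        out.append(acc)
    return out


def blur_image(img, sigma):
    """Separable Gaussian blur on a 2-D grayscale image (list of lists)."""
    if sigma <= 0:
        return [row[:] for row in img]  # sigma = 0 keeps the image sharp
    radius = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, radius)
    # Horizontal pass over rows.
    tmp = [blur_1d(row, kernel, radius) for row in img]
    # Vertical pass: transpose, blur the columns as rows, transpose back.
    cols = [blur_1d(list(col), kernel, radius) for col in zip(*tmp)]
    return [list(row) for row in zip(*cols)]


def augment(img, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """Draw a blur level uniformly at random, mimicking a training set
    that mixes sharp (sigma = 0) and blurred copies of each image.
    The sigma values here are illustrative, not the paper's settings."""
    return blur_image(img, random.choice(sigmas))
```

In a real training loop, `augment` would be applied to each image before it is fed to the network, so the model sees sharp and blurred versions of the same content and is pushed toward the blur-invariant hidden representation the abstract describes.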