arXiv Analytics

arXiv:2112.00725 [cs.CV]

Extrapolating from a Single Image to a Thousand Classes using Distillation

Yuki M. Asano, Aaqib Saeed

Published 2021-12-01, updated 2022-01-19 (version 2)

What can neural networks learn about the visual world from a single image? While a single image obviously cannot contain the multitudes of possible objects, scenes, and lighting conditions that exist within the space of all possible 256^(3×224×224) images of size 224×224, it might still provide a strong prior for natural images. To analyze this hypothesis, we develop a framework for training neural networks from scratch using a single image, by means of knowledge distillation from a teacher pretrained with supervision. With this, we find that the answer to the above question is: 'surprisingly, a lot'. In quantitative terms, we find top-1 accuracies of 94%/74% on CIFAR-10/100, 59% on ImageNet, and, by extending this method to video and audio, 77% on UCF-101 and 84% on SpeechCommands. In extensive analyses we disentangle the effects of the augmentations, the choice of source image, and the network architecture, and we also discover "panda neurons" in networks that have never seen a panda. This work shows that one image can be used to extrapolate to thousands of object classes, and it motivates a renewed research agenda on the fundamental interplay between augmentations and the source image.
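The framework the abstract describes reduces, in outline, to standard knowledge distillation in which every training batch is drawn from augmented crops of one source image. Below is a minimal PyTorch sketch of that idea; it is not the authors' released implementation (that is at the webpage linked below), and the teacher/student pair, augmentation recipe, temperature, batch size, and the file name single_image.jpg are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Frozen, supervisedly pretrained teacher; randomly initialized student.
    teacher = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval().to(device)
    student = models.resnet18(weights=None).train().to(device)
    for p in teacher.parameters():
        p.requires_grad_(False)

    # Heavy augmentation turns the one source image into an endless stream of views.
    augment = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    source = Image.open("single_image.jpg").convert("RGB")  # the single training image (hypothetical path)
    opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    T = 4.0  # distillation temperature (illustrative value)

    for step in range(100_000):
        batch = torch.stack([augment(source) for _ in range(64)]).to(device)
        with torch.no_grad():
            teacher_logits = teacher(batch)
        student_logits = student(batch)
        # Standard distillation loss: KL divergence between temperature-softened
        # teacher and student class distributions, scaled by T^2.
        loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * T * T
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the loss uses only the teacher's outputs, no labels are involved; the student inherits the teacher's 1000-class output space, which is the sense in which a single image can extrapolate to a thousand classes.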

Comments: Webpage/code: https://single-image-distill.github.io/
Categories: cs.CV
Related articles:
arXiv:1707.04682 [cs.CV] (Published 2017-07-15)
Rethinking Reprojection: Closing the Loop for Pose-aware Shape Reconstruction from a Single Image
arXiv:1705.05483 [cs.CV] (Published 2017-05-15)
WordFence: Text Detection in Natural Images with Border Awareness
arXiv:1412.6626 [cs.CV] (Published 2014-12-20)
The local low-dimensionality of natural images