arXiv Analytics

arXiv:1810.03234 [cs.CV]

A look at the topology of convolutional neural networks

Rickard Brüel Gabrielsson, Gunnar Carlsson

Published 2018-10-08 (Version 1)

Convolutional neural networks (CNNs) are powerful and widely used tools. However, their interpretability is far from ideal. In this paper we use topological data analysis to investigate what various CNNs learn. We show that the weights of convolutional layers at depths from 1 through 13 learn simple global structures. We also demonstrate how these simple structures change over the course of training. In particular, we define and analyze the spaces of spatial filters of convolutional layers and show the recurrence, across all networks and depths and throughout training, of a simple circle consisting of rotating edges, as well as a less frequently occurring, unanticipated complex circle that combines lines, edges, and non-linear patterns. We train over a thousand CNNs on MNIST and CIFAR-10, and also use VGG networks pretrained on ImageNet.
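The abstract's notion of a "space of spatial filters" can be illustrated with a minimal sketch: flatten each small convolutional filter into a vector, mean-center it, and normalize it to the unit sphere, yielding a point cloud whose topology (e.g. via persistent homology) can then be studied. The random filter values and the specific preprocessing steps below are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

# Hypothetical sketch: turn 3x3 spatial filters of a convolutional layer
# into a point cloud suitable for topological data analysis. The filter
# values are random stand-ins, not trained weights.
rng = np.random.default_rng(0)
filters = rng.normal(size=(64, 3, 3))  # 64 spatial filters from one layer

points = filters.reshape(64, -1)                       # flatten each filter into R^9
points = points - points.mean(axis=1, keepdims=True)   # remove mean intensity
points = points / np.linalg.norm(points, axis=1, keepdims=True)  # project to unit sphere

# Each row now lies on the unit sphere in R^9; persistent homology of such
# a point cloud (e.g. a Vietoris-Rips filtration) is one way circles like
# the "rotating edges" circle the paper reports could be detected.
print(points.shape)
```

On trained networks, the `filters` array would instead be sliced out of the learned weight tensor of a convolutional layer; the normalization discards brightness and contrast so that only the filter's pattern contributes to the geometry.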

Related articles:
arXiv:2206.04979 [cs.CV] (Published 2022-06-10)
Convolutional Layers Are Not Translation Equivariant
arXiv:1706.02393 [cs.CV] (Published 2017-06-07)
ShiftCNN: Generalized Low-Precision Architecture for Inference of Convolutional Neural Networks
arXiv:2102.11944 [cs.CV] (Published 2021-02-23)
Arguments for the Unsuitability of Convolutional Neural Networks for Non-Local Tasks