arXiv:2011.03043 [cs.LG]

Identifying and interpreting tuning dimensions in deep networks

Nolan S. Dey, J. Eric Taylor, Bryan P. Tripp, Alexander Wong, Graham W. Taylor

Published: 2020-11-05 (Version 1)

In neuroscience, a tuning dimension is a stimulus attribute that accounts for much of the activation variance of a group of neurons. These are commonly used to decipher the responses of such groups. While researchers have attempted to manually identify an analogue to these tuning dimensions in deep neural networks, we are unaware of an automatic way to discover them. This work contributes an unsupervised framework for identifying and interpreting "tuning dimensions" in deep networks. Our method correctly identifies the tuning dimensions of a synthetic Gabor filter bank and tuning dimensions of the first two layers of InceptionV1 trained on ImageNet.
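The Gabor filter bank experiment mentioned in the abstract suggests a way to see the idea in miniature. The sketch below is an illustrative assumption, not the authors' pipeline: it builds a small bank of Gabor "units", drives them with oriented gratings, and checks whether the leading principal components of the population activations track grating orientation, i.e. whether orientation emerges as a tuning dimension. All names and parameters here (gabor, grating, filter sizes, frequencies) are invented for illustration.

```python
# Minimal sketch (assumptions throughout, not the paper's exact method):
# recover a "tuning dimension" of a synthetic Gabor filter bank by relating
# the principal axes of its activation variance to a known stimulus
# attribute (grating orientation).
import numpy as np
from sklearn.decomposition import PCA

def gabor(theta, size=32, sigma=6.0, freq=0.15):
    """Real-valued Gabor filter at orientation theta (radians)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def grating(theta, size=32, freq=0.15):
    """Sinusoidal grating stimulus at orientation theta."""
    ax = np.arange(size)
    x, y = np.meshgrid(ax, ax)
    return np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))

# "Neurons": a bank of Gabor filters spanning orientation.
unit_thetas = np.linspace(0, np.pi, 16, endpoint=False)
units = np.stack([gabor(t).ravel() for t in unit_thetas])       # (units, pixels)

# Stimuli: gratings whose orientation is the attribute we hope to recover.
rng = np.random.default_rng(0)
stim_thetas = rng.uniform(0, np.pi, 500)
stims = np.stack([grating(t).ravel() for t in stim_thetas])     # (stimuli, pixels)

# Rectified activations of each unit to each stimulus.
acts = np.maximum(stims @ units.T, 0.0)                         # (stimuli, units)

# Directions of maximal activation variance across the unit population.
scores = PCA(n_components=2).fit_transform(acts)

# If orientation is a tuning dimension, the leading PCs should vary
# systematically with it; check against sin/cos of 2*theta, since
# orientation is pi-periodic.
for k in range(2):
    r_cos = np.corrcoef(scores[:, k], np.cos(2 * stim_thetas))[0, 1]
    r_sin = np.corrcoef(scores[:, k], np.sin(2 * stim_thetas))[0, 1]
    print(f"PC{k + 1}: corr with cos(2θ) = {r_cos:+.2f}, sin(2θ) = {r_sin:+.2f}")
```

In this toy setup, a strong correlation between the leading principal components and the orientation of the gratings is what "identifying a tuning dimension" means: a stimulus attribute that explains most of the activation variance of the unit population. The paper's framework applies the same kind of unsupervised variance analysis to real network layers (e.g. InceptionV1) and then interprets the recovered dimensions.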

Comments: 14 pages, 12 figures; Shared Visual Representations in Human & Machine Intelligence Workshop, NeurIPS 2020
Categories: cs.LG, cs.AI, cs.CV
Related articles:
arXiv:1902.02366 [cs.LG] (Published 2019-02-06)
Negative eigenvalues of the Hessian in deep neural networks
arXiv:1907.08475 [cs.LG] (Published 2019-07-19)
Representational Capacity of Deep Neural Networks -- A Computing Study
arXiv:1905.09680 [cs.LG] (Published 2019-05-23)
DEEP-BO for Hyperparameter Optimization of Deep Networks