arXiv Analytics

arXiv:1506.08230 [cs.LG]

Scale-invariant learning and convolutional networks

Soumith Chintala, Marc'Aurelio Ranzato, Arthur Szlam, Yuandong Tian, Mark Tygert, Wojciech Zaremba

Published 2015-06-26, Version 1

The conventional classification schemes -- notably multinomial logistic regression -- used in conjunction with convolutional networks (convnets) are classical in statistics, designed without consideration for the usual coupling with convnets, stochastic gradient descent, and backpropagation. In the specific application to supervised learning for convnets, a simple scale-invariant classification stage turns out to be more robust than multinomial logistic regression, appears to result in slightly lower errors on several standard test sets, has similar computational costs, and features precise control over the actual rate of learning. "Scale-invariant" means that multiplying the input values by any nonzero scalar leaves the output unchanged.
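To make the scale-invariance property concrete, here is a minimal illustrative sketch (not the authors' exact classifier): features are projected onto the unit sphere before being scored against per-class target vectors, so the prediction depends only on the direction of the input. The random `templates` matrix is a hypothetical stand-in for learned class targets, and this particular normalization gives invariance under multiplication by positive scalars.

```python
import numpy as np

def scale_invariant_predict(x, templates):
    # Project the feature vector onto the unit sphere, so the score
    # depends only on the direction of x, not its magnitude
    # (invariant under multiplication by any positive scalar).
    x = x / np.linalg.norm(x)
    # Score each class by the inner product with its target vector.
    return int(np.argmax(templates @ x))

rng = np.random.default_rng(0)
templates = rng.standard_normal((10, 64))  # hypothetical class target vectors
x = rng.standard_normal(64)

pred = scale_invariant_predict(x, templates)
# Rescaling the input leaves the prediction unchanged.
assert scale_invariant_predict(1000.0 * x, templates) == pred
assert scale_invariant_predict(1e-3 * x, templates) == pred
```

The normalization step is what decouples the learning dynamics from the overall scale of the features, which is the robustness property the abstract contrasts with multinomial logistic regression.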

Related articles:
arXiv:1705.02302 [cs.LG] (Published 2017-05-05)
Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions
arXiv:2408.16686 [cs.LG] (Published 2024-08-29)
CW-CNN & CW-AN: Convolutional Networks and Attention Networks for CW-Complexes
arXiv:1509.09292 [cs.LG] (Published 2015-09-30)
Convolutional Networks on Graphs for Learning Molecular Fingerprints