arXiv:1301.3557 [cs.LG]
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
Published 2013-01-16 (Version 1)
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
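Below is a minimal sketch of the pooling step described in the abstract, written in NumPy. The function name `stochastic_pool` and its arguments are hypothetical, not from the paper; it only illustrates drawing one activation per pooling region from the multinomial distribution defined by the (non-negative) activities in that region.

```python
import numpy as np

def stochastic_pool(region, rng=None):
    """Sample one activation from a single pooling region (illustrative sketch).

    region : 1-D array of non-negative activations within one pooling region.
    Returns an activation drawn with probability p_i = a_i / sum_j a_j.
    """
    rng = np.random.default_rng() if rng is None else rng
    region = np.asarray(region, dtype=float)
    total = region.sum()
    if total == 0:
        return 0.0  # degenerate case: no activity in the region
    probs = region / total                   # multinomial probabilities from activities
    idx = rng.choice(len(region), p=probs)   # randomly pick a location
    return region[idx]                       # pooled output is the sampled activation

# Example: a 2x2 pooling region flattened to a vector
print(stochastic_pool([1.0, 2.0, 3.0, 4.0]))
```

Because the sampling is driven entirely by the activations themselves, the procedure introduces no extra hyper-parameters, which is why it composes naturally with dropout and data augmentation.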
Comments: 9 pages
Related articles:
arXiv:1912.13384 [cs.LG] (Published 2019-12-21)
Data Augmentation by AutoEncoders for Unsupervised Anomaly Detection
arXiv:2203.16481 [cs.LG] (Published 2022-03-30)
On Uncertainty, Tempering, and Data Augmentation in Bayesian Classification
arXiv:2203.03304 [cs.LG] (Published 2022-03-07)
Regularising for invariance to data augmentation improves supervised learning