arXiv Analytics

arXiv:1502.03537 [cs.LG]

Convergence of gradient based pre-training in Denoising autoencoders

Vamsi K Ithapu, Sathya Ravi, Vikas Singh

Published 2015-02-12 (Version 1)

The success of deep architectures is at least in part attributed to the layer-by-layer unsupervised pre-training that initializes the network. Various papers have reported extensive empirical analyses on the design and implementation of good pre-training procedures. However, the literature still lacks an understanding of the consistency of the parameter estimates, the convergence of the learning procedure, and the sample sizes required. In this work, we study pre-training in classical and distributed denoising autoencoders with these goals in mind. We show that the gradient converges at a rate of $\frac{1}{\sqrt{N}}$, where $N$ is the number of samples, and has a sub-linear dependence on the size of the autoencoder network. In a distributed setting where disjoint sections of the whole network are pre-trained synchronously, we show that the convergence improves by at least $\tau^{3/4}$, where $\tau$ corresponds to the size of the sections. We provide a broad set of experiments that empirically evaluate the suggested behavior.
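To make the object of study concrete, the sketch below pre-trains a single denoising-autoencoder layer with plain stochastic gradient descent and records the per-epoch gradient norm, the quantity whose decay the paper bounds. This is only an illustrative setup under assumptions not taken from the paper (tied weights, sigmoid units, squared-error reconstruction loss, masking corruption at rate 0.3, and the helper names `sigmoid` and `dae_pretrain`), not the authors' implementation or their distributed scheme.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dae_pretrain(X, hidden_dim, corruption=0.3, lr=0.1, epochs=20, seed=0):
    """Illustrative pre-training of one denoising-autoencoder layer with SGD.

    X          : (N, d) input matrix for this layer
    hidden_dim : number of hidden units
    corruption : probability of zeroing each input coordinate (masking noise)
    Returns the encoder weights, hidden bias, and per-epoch gradient norms.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    W = rng.normal(scale=0.01, size=(d, hidden_dim))  # tied weights: decoder uses W.T
    b_h = np.zeros(hidden_dim)
    b_v = np.zeros(d)
    grad_norms = []

    for _ in range(epochs):
        norm_sq = 0.0
        for i in rng.permutation(N):
            x = X[i]
            x_tilde = x * (rng.random(d) > corruption)  # masking corruption
            h = sigmoid(x_tilde @ W + b_h)              # encode corrupted input
            x_hat = sigmoid(h @ W.T + b_v)              # decode with tied weights

            # Backprop of the squared-error loss against the *clean* input.
            delta_v = (x_hat - x) * x_hat * (1 - x_hat)        # (d,)
            delta_h = (delta_v @ W) * h * (1 - h)              # (hidden_dim,)
            grad_W = np.outer(x_tilde, delta_h) + np.outer(delta_v, h)

            W -= lr * grad_W
            b_h -= lr * delta_h
            b_v -= lr * delta_v
            norm_sq += np.sum(grad_W ** 2)

        # Root-mean-square per-sample gradient norm for this pass over the data.
        grad_norms.append(np.sqrt(norm_sq / N))
    return W, b_h, grad_norms
```

Plotting `grad_norms` for increasing sample sizes $N$ is one rough way to eyeball the $\frac{1}{\sqrt{N}}$ trend the abstract describes, though the paper's formal statement concerns the pre-training procedure itself rather than this simplified loop.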

Related articles:
arXiv:1811.09358 [cs.LG] (Published 2018-11-23)
A Sufficient Condition for Convergences of Adam and RMSProp
arXiv:2109.03194 [cs.LG] (Published 2021-09-07)
On the Convergence of Decentralized Adaptive Gradient Methods
arXiv:1810.00122 [cs.LG] (Published 2018-09-29)
On the Convergence and Robustness of Batch Normalization