arXiv:2002.09917 [cs.LG]

Improve SGD Training via Aligning Mini-batches

Xiangrui Li, Deng Pan, Xin Li, Dongxiao Zhu

Published 2020-02-23 (Version 1)

Deep neural networks (DNNs) for supervised learning can be viewed as a pipeline of a feature extractor (i.e., the last hidden layer) and a linear classifier (i.e., the output layer) that is trained jointly with stochastic gradient descent (SGD). In each iteration of SGD, a mini-batch is sampled from the training data and the true gradient of the loss function is estimated by the noisy gradient computed on this mini-batch. From the feature learning perspective, the feature extractor should be updated to learn features that are meaningful for the entire data, rather than accommodating the noise of any individual mini-batch. With this motivation, we propose In-Training Distribution Matching (ITDM) to improve DNN training and reduce overfitting. Specifically, along with the loss function, ITDM regularizes the feature extractor by matching the moments of the distributions of different mini-batches in each iteration of SGD, which is achieved by minimizing the maximum mean discrepancy (MMD). As such, ITDM does not assume any explicit parametric form of the data distribution in the latent feature space. Extensive experiments are conducted to demonstrate the effectiveness of our proposed strategy.
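
To make the mini-batch matching idea concrete, the sketch below adds an RBF-kernel MMD penalty between the latent features of two mini-batches to a standard cross-entropy objective. This is only a minimal illustration of the mechanism described in the abstract, not the authors' implementation: the network architecture, the two-batch sampling scheme, and the names and hyperparameters (`mmd_rbf`, `itdm_step`, `sigma`, `lam`) are assumptions made for the example.

```python
# Hedged sketch of MMD-based mini-batch alignment (not the paper's exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD between feature batches x and y
    under a Gaussian (RBF) kernel with bandwidth sigma (assumed value)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

class Net(nn.Module):
    """Feature extractor (hidden layers) followed by a linear classifier."""
    def __init__(self, in_dim=784, feat_dim=128, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      nn.Linear(256, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.features(x)                     # latent features
        return self.classifier(z), z

def itdm_step(model, optimizer, batch_a, batch_b, lam=0.1):
    """One SGD step: cross-entropy on batch_a plus an MMD penalty that
    pulls the latent feature distributions of the two mini-batches together."""
    (xa, ya), (xb, _) = batch_a, batch_b
    logits, za = model(xa)
    _, zb = model(xb)
    loss = F.cross_entropy(logits, ya) + lam * mmd_rbf(za, zb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the second mini-batch could simply be another draw from the same data loader, and the MMD term is computed without any parametric assumption on the feature distribution, which mirrors the nonparametric moment-matching property highlighted in the abstract; the regularization weight `lam` and kernel bandwidth `sigma` would need tuning in practice.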

Related articles:
arXiv:1912.00789 [cs.LG] (Published 2019-12-02)
Is Discriminator a Good Feature Extractor?
arXiv:1901.09178 [cs.LG] (Published 2019-01-26)
A general model for plane-based clustering with loss function
arXiv:1903.02893 [cs.LG] (Published 2019-03-07)
Only sparsity based loss function for learning representations