arXiv:2007.14166 [cs.LG]

A Comparison of Optimization Algorithms for Deep Learning

Derya Soydaner

Published 2020-07-28 (Version 1)

In recent years, we have witnessed the rise of deep learning. Deep neural networks have proven successful in many areas. However, optimizing these networks has become more difficult as neural networks grow deeper and datasets become larger. Therefore, more advanced optimization algorithms have been proposed in recent years. In this study, widely used optimization algorithms for deep learning are examined in detail. To this end, these algorithms, known as adaptive gradient methods, are implemented for both supervised and unsupervised tasks. The behaviour of the algorithms during training and their results on four image datasets, namely MNIST, CIFAR-10, Kaggle Flowers and Labeled Faces in the Wild, are compared, pointing out their differences from basic optimization algorithms.
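The abstract does not list the specific optimizers compared, so the following is only a minimal NumPy sketch contrasting a basic method (vanilla gradient descent) with Adam, a typical adaptive gradient method, on a toy ill-conditioned quadratic; it is an illustration of the two update rules, not the paper's implementation or experimental setup.

```python
import numpy as np

def grad(theta):
    # Gradient of f(theta) = 0.5 * theta^T A theta with an ill-conditioned A,
    # a simple stand-in for a loss surface with very different curvatures.
    A = np.diag([1.0, 100.0])
    return A @ theta

def sgd(theta, lr=0.01, steps=200):
    # Basic gradient descent: one global learning rate for every parameter.
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

def adam(theta, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8, steps=200):
    # Adam: per-parameter step sizes from running moment estimates of the gradient.
    m = np.zeros_like(theta)  # first-moment (mean) estimate
    v = np.zeros_like(theta)  # second-moment (uncentered variance) estimate
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction for the zero initialization
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

theta0 = np.array([1.0, 1.0])
print("SGD :", sgd(theta0))
print("Adam:", adam(theta0))
```

On such a surface the adaptive method rescales each coordinate by its gradient history, which is the kind of behavioural difference against basic optimizers that the study examines during training.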

Related articles:
arXiv:2010.09458 [cs.LG] (Published 2020-10-15)
Review and Comparison of Commonly Used Activation Functions for Deep Neural Networks
arXiv:1504.06825 [cs.LG] (Published 2015-04-26)
Comparison of Training Methods for Deep Neural Networks
arXiv:2307.02973 [cs.LG] (Published 2023-07-06)
Pruning vs Quantization: Which is Better?