arXiv:2211.15880 [cs.LG]

Mirror descent of Hopfield model

Hyungjoon Soh, Dongyeob Kim, Juno Hwang, Junghyo Jo

Published 2022-11-29Version 1

Mirror descent is a gradient descent method that operates in a dual space of parametric models. The idea has been developed extensively in convex optimization, but it has not yet been widely applied in machine learning. In this study, we show how mirror descent can enable data-driven parameter initialization of neural networks. Adopting the Hopfield model as a prototype of neural networks, we demonstrate that mirror descent can train the model more effectively than the usual gradient descent with random parameter initialization.
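The dual-space update the abstract refers to can be illustrated with a minimal sketch (this is an illustrative example, not the authors' implementation): with the negative-entropy mirror map, mirror descent reduces to exponentiated gradient descent, whose iterates stay on the probability simplex.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, lr=0.1, steps=200):
    """Entropic mirror descent over the probability simplex (illustrative sketch).

    Each step maps the iterate to the dual space (log), takes a gradient
    step there, and maps back (exp); normalization is the Bregman
    projection onto the simplex.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-lr * grad(x))  # multiplicative (dual-space) update
        x /= x.sum()                   # project back onto the simplex
    return x

# Example: minimize the linear objective <c, x> over the simplex.
# The minimizer concentrates all mass on the smallest coordinate of c.
c = np.array([3.0, 1.0, 2.0])
x_star = mirror_descent_simplex(lambda x: c, np.ones(3) / 3)
```

In contrast to plain gradient descent, which would require an explicit Euclidean projection to stay feasible, the entropic mirror map keeps the iterate strictly positive and normalized by construction.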

Related articles:
arXiv:1803.01206 [cs.LG] (Published 2018-03-03)
On the Power of Over-parametrization in Neural Networks with Quadratic Activation
arXiv:1807.04225 [cs.LG] (Published 2018-07-11)
Measuring abstract reasoning in neural networks
arXiv:1812.10386 [cs.LG] (Published 2018-12-26)
ECG Segmentation by Neural Networks: Errors and Correction