arXiv:2403.11787 [math.NA]

On the Convergence of A Data-Driven Regularized Stochastic Gradient Descent for Nonlinear Ill-Posed Problems

Zehui Zhou

Published 2024-03-18, updated 2024-09-27 (Version 2)

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems, owing to its excellent scalability with respect to data size. In this work, we analyze a new data-driven regularized stochastic gradient descent method for the efficient numerical solution of a class of nonlinear ill-posed inverse problems in infinite-dimensional Hilbert spaces. At each iteration, the method randomly selects one equation from the nonlinear system, combines it with the corresponding equation from a system learned from training data to obtain a stochastic estimate of the gradient, and then performs a descent step with the estimated gradient. We prove the regularizing property of this method under the tangential cone condition and an a priori parameter choice, and then derive convergence rates under additional source and range invariance conditions. Several numerical experiments are provided to complement the analysis.
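To make the iteration concrete, the following is a minimal sketch of a regularized SGD step of the general form described in the abstract, written in a finite-dimensional (discretized) setting. It is an illustration under assumptions, not the paper's exact scheme: the forward maps F[i] and Jacobians dF[i], the learned operators A[i] with right-hand sides b[i], the coupling weight lam, and the decaying step size are all placeholder choices introduced here.

```python
import numpy as np

# Hypothetical sketch only: one possible discretized form of a data-driven
# regularized SGD iteration. At each step, one nonlinear equation F_i(x) = y_i
# is sampled at random, paired with a corresponding "learned" linear equation
# A_i x = b_i obtained from training data, and a descent step is taken along
# the combined stochastic gradient estimate.

def data_driven_reg_sgd(F, dF, A, b, y, x0, n_iter=1000, eta0=1e-2, lam=1e-1,
                        rng=None):
    """Sketch of a data-driven regularized SGD loop (assumed form).

    F  : list of callables, F[i](x) -> value of the i-th nonlinear equation
    dF : list of callables, dF[i](x) -> Jacobian matrix of F[i] at x
    A  : list of matrices, learned (data-driven) operators, one per equation
    b  : list of vectors, right-hand sides of the learned equations
    y  : list of vectors, (noisy) data for the nonlinear equations
    x0 : initial guess
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    n_eq = len(F)
    for k in range(n_iter):
        i = rng.integers(n_eq)            # randomly select one equation
        eta = eta0 / np.sqrt(k + 1)       # decaying step size (an assumption)
        # stochastic gradient of 0.5 * ||F_i(x) - y_i||^2
        g_nl = dF[i](x).T @ (F[i](x) - y[i])
        # gradient contribution of the corresponding learned equation
        g_dd = A[i].T @ (A[i] @ x - b[i])
        # descent step with the combined stochastic gradient estimate
        x = x - eta * (g_nl + lam * g_dd)
    return x
```

In the infinite-dimensional analysis of the paper, the step-size schedule, the regularization parameter, and the stopping index are chosen a priori to obtain the regularizing property; the fixed schedule and coupling weight above are only stand-ins for such a choice.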
