arXiv Analytics


arXiv:1801.03600 [math.OC]

Multi-Level Stochastic Gradient Methods for Nested Composition Optimization

Shuoguang Yang, Mengdi Wang, Ethan X. Fang

Published 2018-01-11 (Version 1)

Stochastic gradient methods are scalable for solving large-scale optimization problems that involve empirical expectations of loss functions. Existing results mainly apply to optimization problems where the objectives are one- or two-level expectations. In this paper, we consider the multi-level compositional optimization problem, which involves compositions of multi-level component functions and nested expectations over a random path. It finds applications in risk-averse optimization and sequential planning. We propose a class of multi-level stochastic gradient methods motivated by multi-timescale stochastic approximation. First, we propose a basic $T$-level stochastic compositional gradient algorithm, establish its almost sure convergence, and obtain an $n$-iteration error bound of $\mathcal{O}(n^{-1/2^T})$. We then develop accelerated multi-level stochastic gradient methods using an extrapolation-interpolation scheme that exploits the smoothness of individual component functions. When all component functions are smooth, we show that the convergence rate improves to $\mathcal{O}(n^{-4/(7+T)})$ for general objectives and $\mathcal{O}(n^{-4/(3+T)})$ for strongly convex objectives. We also provide almost sure convergence and rate-of-convergence results for nonconvex problems. The proposed methods and theoretical results are validated using numerical experiments.
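The basic $T$-level scheme described above maintains, for each level of the composition, a running estimate of that level's composed function value and updates the decision variable with a chain-rule product of sampled Jacobians, with each quantity moving on its own timescale. The Python sketch below illustrates this idea for a three-level problem $F(x) = f_1(f_2(f_3(x)))$, where each level is accessible only through noisy samples. The concrete component functions, noise model, and step-size exponents are illustrative assumptions, not the paper's exact construction or analysis.

```python
import numpy as np

# Illustrative multi-timescale stochastic compositional gradient sketch for a
# 3-level problem F(x) = f1(f2(f3(x))).  Each level is observed only through
# noisy values/Jacobians.  All specific functions, noise levels, and step-size
# exponents here are assumptions chosen for demonstration.

rng = np.random.default_rng(0)
d = 5
A3 = rng.standard_normal((4, d))   # f3(x) = A3 x
A2 = rng.standard_normal((3, 4))   # f2(y) = A2 tanh(y)
w1 = rng.standard_normal(3)        # f1(z) = 0.5 * ||z - w1||^2

def sample_f3(x):                   # noisy value and Jacobian of f3
    return A3 @ x + 0.1 * rng.standard_normal(4), A3

def sample_f2(y):                   # noisy value and Jacobian of f2
    return (A2 @ np.tanh(y) + 0.1 * rng.standard_normal(3),
            A2 * (1 - np.tanh(y) ** 2))

def sample_f1_grad(z):              # noisy gradient of the outermost level f1
    return z - w1 + 0.1 * rng.standard_normal(3)

x = np.zeros(d)
u3 = np.zeros(4)                    # running estimate of f3(x)
u2 = np.zeros(3)                    # running estimate of f2(f3(x))

for n in range(1, 20001):
    alpha = 1.0 / n ** 0.75         # slowest timescale: decision variable
    beta3 = 1.0 / n ** 0.5          # faster timescales: tracking variables
    beta2 = 1.0 / n ** 0.625

    v3, J3 = sample_f3(x)
    u3 = (1 - beta3) * u3 + beta3 * v3      # track the innermost value
    v2, J2 = sample_f2(u3)
    u2 = (1 - beta2) * u2 + beta2 * v2      # track the middle composition
    g1 = sample_f1_grad(u2)

    # chain rule with sampled Jacobians evaluated at the tracked estimates
    x -= alpha * (J3.T @ (J2.T @ g1))

print("final objective estimate:",
      0.5 * np.linalg.norm(A2 @ np.tanh(A3 @ x) - w1) ** 2)
```

The key design choice is the step-size separation: the tracking variables are updated with larger step sizes than the decision variable, so the nested function-value estimates equilibrate between slow updates of $x$, which is the essence of the multi-timescale stochastic approximation viewpoint.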
