arXiv:1604.04713 [math.OC]

Stochastic Optimization Algorithms for Convex Optimization with Fixed Point Constraints

Hideaki Iiduka

Published 2016-04-16 (Version 1)

The main objective of this paper is to solve a stochastic programming problem in which the objective function is the expectation of convex functions and the constraint set is the intersection of fixed point sets of nonexpansive mappings on a real Hilbert space. This fixed point formulation of the constraints covers the case in which the projection onto each constraint set cannot be computed efficiently. Two optimization algorithms are proposed for solving the problem; both use a convex function and a nonexpansive mapping selected by a certain probabilistic process at each iteration. The first blends a stochastic gradient method with the Halpern fixed point algorithm, a classical method for finding fixed points of nonexpansive mappings. The second combines a stochastic proximal point algorithm with the Halpern fixed point algorithm and can therefore be applied to nonsmooth convex optimization. Convergence analysis shows that, under certain assumptions, any weak sequential cluster point of the sequence generated by either algorithm almost surely belongs to the solution set of the problem. Convergence rate analysis illustrates the efficiency of the two algorithms, and numerical results for concrete convex optimization problems over fixed point sets demonstrate their effectiveness.
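
To make the structure of the first algorithm concrete, below is a minimal Python sketch of a Halpern-anchored stochastic gradient iteration that applies a randomly selected nonexpansive mapping each step. The function names (stochastic_halpern_gradient, proj_ball), the step-size and anchoring schedules, and the toy problem are illustrative assumptions rather than the paper's exact scheme or parameter conditions; the proximal variant would replace the gradient step with a stochastic proximal step.

import numpy as np

def stochastic_halpern_gradient(x0, sample_grad, mappings, n_iters=2000, seed=0):
    # Sketch of a Halpern-anchored stochastic gradient iteration:
    #   y_n     = T_{i(n)}( x_n - lambda_n * G(x_n, xi_n) )
    #   x_{n+1} = alpha_n * x_0 + (1 - alpha_n) * y_n
    # where the T_i are nonexpansive mappings (e.g., metric projections),
    # G(x, xi) is a stochastic gradient of the expected objective, and
    # alpha_n -> 0 and lambda_n are diminishing parameters (schedules assumed here).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    anchor = x.copy()                             # Halpern anchor (taken as the initial point)
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 1)                     # Halpern anchoring weight
        lam = 1.0 / np.sqrt(n)                    # step size (illustrative schedule)
        T = mappings[rng.integers(len(mappings))] # randomly chosen nonexpansive mapping
        g = sample_grad(x, rng)                   # stochastic gradient at x
        y = T(x - lam * g)                        # gradient step followed by the mapping
        x = alpha * anchor + (1.0 - alpha) * y    # Halpern convex combination
    return x

# Toy use: minimize E[ 0.5 * ||x - a||^2 ] with a ~ N(0, 0.1^2 I), subject to x lying in
# the intersection of two Euclidean balls; each ball's metric projection is nonexpansive.
def proj_ball(center, radius):
    def P(x):
        d = x - center
        r = np.linalg.norm(d)
        return x if r <= radius else center + radius * d / r
    return P

mappings = [proj_ball(np.zeros(2), 1.0), proj_ball(np.array([0.5, 0.0]), 1.0)]
sample_grad = lambda x, rng: x - rng.normal(scale=0.1, size=x.shape)  # grad of 0.5||x-a||^2 at sampled a
print(stochastic_halpern_gradient(np.array([2.0, 2.0]), sample_grad, mappings))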
