{ "id": "1805.07311", "version": "v1", "published": "2018-05-18T16:21:02.000Z", "updated": "2018-05-18T16:21:02.000Z", "title": "Blended Conditional Gradients: the unconditioning of conditional gradients", "authors": [ "Gábor Braun", "Sebastian Pokutta", "Dan Tu", "Stephen Wright" ], "comment": "28 pages + 11 figures", "categories": [ "math.OC", "cs.CC", "cs.LG" ], "abstract": "We present a blended conditional gradient approach for minimizing a smooth convex function over a polytope $P$ that combines the Frank--Wolfe algorithm (also called conditional gradient) with gradient-based steps distinct from away steps and pairwise steps, while still achieving linear convergence for strongly convex functions and good practical performance. Our approach retains all favorable properties of conditional gradient algorithms, most notably the avoidance of projections onto $P$ and the maintenance of iterates as sparse convex combinations of a limited number of extreme points of $P$. The algorithm decreases measures of optimality (primal and dual gaps) rapidly, both in the number of iterations and in wall-clock time, outperforming even the efficient \"lazified\" conditional gradient algorithms of [arXiv:1410.8816]; notably, our algorithm is itself lazified. We also present a streamlined version of the algorithm for the case where $P$ is the probability simplex.", "revisions": [ { "version": "v1", "updated": "2018-05-18T16:21:02.000Z" } ], "analyses": { "subjects": [ "68Q32", "90C52" ], "keywords": [ "conditional gradient algorithms", "smooth convex function", "blended conditional gradient approach", "sparse convex combinations", "algorithm decreases measures" ], "note": { "typesetting": "TeX", "pages": 28, "language": "en", "license": "arXiv", "status": "editable" } } }