{ "id": "2006.06359", "version": "v1", "published": "2020-06-11T12:21:13.000Z", "updated": "2020-06-11T12:21:13.000Z", "title": "Improved Algorithms for Convex-Concave Minimax Optimization", "authors": [ "Yuanhao Wang", "Jian Li" ], "categories": [ "cs.LG", "stat.ML" ], "abstract": "This paper studies minimax optimization problems $\\min_x \\max_y f(x,y)$, where $f(x,y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$ and $(L_x,L_{xy},L_y)$-smooth. \\citet{zhang2019lower} provided the following lower bound of the gradient complexity for any first-order method: $\\Omega\\Bigl(\\sqrt{\\frac{L_x}{m_x}+\\frac{L_{xy}^2}{m_x m_y}+\\frac{L_y}{m_y}}\\ln(1/\\epsilon)\\Bigr).$ This paper proposes a new algorithm with gradient complexity upper bound $\\tilde{O}\\Bigl(\\sqrt{\\frac{L_x}{m_x}+\\frac{L\\cdot L_{xy}}{m_x m_y}+\\frac{L_y}{m_y}}\\ln\\left(1/\\epsilon\\right)\\Bigr),$ where $L=\\max\\{L_x,L_{xy},L_y\\}$. This improves over the best known upper bound $\\tilde{O}\\left(\\sqrt{\\frac{L^2}{m_x m_y}} \\ln^3\\left(1/\\epsilon\\right)\\right)$ by \\citet{lin2020near}. Our bound achieves linear convergence rate and tighter dependency on condition numbers, especially when $L_{xy}\\ll L$ (i.e., when the interaction between $x$ and $y$ is weak). Via reduction, our new bound also implies improved bounds for strongly convex-concave and convex-concave minimax optimization problems. When $f$ is quadratic, we can further improve the upper bound, which matches the lower bound up to a small sub-polynomial factor.", "revisions": [ { "version": "v1", "updated": "2020-06-11T12:21:13.000Z" } ], "analyses": { "keywords": [ "convex-concave minimax optimization", "upper bound", "bound achieves linear convergence rate", "paper studies minimax optimization problems", "gradient complexity" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }