{ "id": "1506.07613", "version": "v1", "published": "2015-06-25T04:56:50.000Z", "updated": "2015-06-25T04:56:50.000Z", "title": "Generalized Majorization-Minimization", "authors": [ "Sobhan Naderi Parizi", "Kun He", "Stan Sclaroff", "Pedro Felzenszwalb" ], "categories": [ "cs.CV", "cs.IT", "cs.LG", "math.IT", "stat.ML" ], "abstract": "Non-convex optimization is ubiquitous in machine learning. The Majorization-Minimization (MM) procedure systematically optimizes non-convex functions through an iterative construction and optimization of upper bounds on the objective function. The bound at each iteration is required to \\emph{touch} the objective function at the optimizer of the previous bound. We show that this touching constraint is unnecessary and overly restrictive. We generalize MM by relaxing this constraint, and propose a new framework for designing optimization algorithms, named Generalized Majorization-Minimization (G-MM). Compared to MM, G-MM is much more flexible. For instance, it can incorporate application-specific biases into the optimization procedure without changing the objective function. We derive G-MM algorithms for several latent variable models and show that they consistently outperform their MM counterparts in optimizing non-convex objectives. In particular, G-MM algorithms appear to be less sensitive to initialization.", "revisions": [ { "version": "v1", "updated": "2015-06-25T04:56:50.000Z" } ], "analyses": { "keywords": [ "generalized majorization-minimization", "objective function", "procedure systematically optimizes non-convex functions", "incorporate application-specific biases", "g-mm algorithms appear" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }