{ "id": "1804.04503", "version": "v1", "published": "2018-04-11T02:51:07.000Z", "updated": "2018-04-11T02:51:07.000Z", "title": "When optimizing nonlinear objectives is no harder than linear objectives", "authors": [ "Daniel Alabi", "Nicole Immorlica", "Adam Tauman Kalai" ], "categories": [ "cs.LG", "cs.DS", "stat.ML" ], "abstract": "Most systems and learning algorithms optimize average performance or average loss, in part because of computational complexity. However, many objectives of practical interest are more complex than a simple average. Examples include balancing performance or loss with fairness across people, and balancing precision with recall. We prove that, from a computational perspective, fairly general families of complex objectives are not significantly harder to optimize than standard averages: we provide polynomial-time reductions, i.e., algorithms that optimize complex objectives using linear optimizers. The families of objectives covered include arbitrary continuous functions of average group performances, as well as convex objectives. We illustrate with applications to fair machine learning, fair optimization, and F1-scores.", "revisions": [ { "version": "v1", "updated": "2018-04-11T02:51:07.000Z" } ], "analyses": { "keywords": [ "optimizing nonlinear objectives", "average loss", "complex objectives", "learning algorithms optimize average performance", "average group performances" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }