arXiv:2202.03051 [cs.LG]

Using Partial Monotonicity in Submodular Maximization

Loay Mualem, Moran Feldman

Published 2022-02-07, updated 2022-05-17 (version 2)

Over the last two decades, submodular function maximization has been the workhorse of many discrete optimization problems in machine learning applications. Traditionally, the study of submodular functions was based on binary function properties. However, such properties have an inherent weakness: if an algorithm assumes that its input function has a particular property, it provides no guarantee for functions that violate this property, even when the violation is very slight. Therefore, recent works began to consider continuous versions of function properties. Arguably the most significant of these so far are the submodularity ratio and the curvature, which have been studied extensively both together and separately. The monotonicity property of set functions plays a central role in submodular maximization. Nevertheless, despite all the above works, no continuous version of this property has been suggested to date (as far as we know). This is unfortunate, since submodular functions that are almost monotone often arise in machine learning applications. In this work we fill this gap by defining the monotonicity ratio, a continuous version of the monotonicity property. We then show that, for many standard submodular maximization algorithms, one can prove new approximation guarantees that depend on the monotonicity ratio, leading to improved approximation ratios for the common machine learning applications of movie recommendation, quadratic programming, and image summarization.
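
To make the notion concrete, the sketch below brute-forces one plausible formalization of a monotonicity ratio: m = min over all nested pairs S ⊆ T of f(T)/f(S) for a non-negative set function f, with 0/0 read as 1, so that m = 1 for monotone functions and smaller m quantifies how far f is from monotone. This definition is an assumption inferred from the abstract's description, and the function monotonicity_ratio and the toy function g are hypothetical illustrations, not taken from the paper.

```python
from itertools import combinations

def monotonicity_ratio(f, ground_set):
    """Brute-force m = min_{S subset of T} f(T) / f(S) over all nested
    subset pairs of a small ground set (assumed formalization; only
    practical for tiny ground sets, since all 2^n subsets are enumerated)."""
    elements = list(ground_set)
    subsets = [frozenset(c)
               for k in range(len(elements) + 1)
               for c in combinations(elements, k)]
    m = 1.0
    for s in subsets:
        fs = f(s)
        if fs == 0:
            continue  # ratio is 0/0 (read as 1) or infinite; neither lowers the minimum
        for t in subsets:
            if s <= t:  # only nested pairs S ⊆ T
                m = min(m, f(t) / fs)
    return m

# Toy example: g rises and then falls with |S|, so it is non-monotone but
# never drops to zero; its monotonicity ratio lies strictly between 0 and 1.
g = lambda s: 2 + len(s) * (4 - len(s))
print(monotonicity_ratio(g, range(4)))  # 2/6 = 0.333..., from |S| = 2, T = ground set
```

For a monotone function the minimum is attained at S = T and the sketch returns 1, matching the intended normalization.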

Related articles:
arXiv:2006.00038 [cs.LG] (Published 2020-05-29)
Quasi-orthonormal Encoding for Machine Learning Applications
arXiv:1212.1100 [cs.LG] (Published 2012-12-05)
Making Early Predictions of the Accuracy of Machine Learning Applications
arXiv:2311.13883 [cs.LG] (Published 2023-11-23)
Leveraging Optimal Transport via Projections on Subspaces for Machine Learning Applications