{ "id": "1307.5438", "version": "v3", "published": "2013-07-20T16:40:46.000Z", "updated": "2014-10-05T04:20:27.000Z", "title": "Towards Distribution-Free Multi-Armed Bandits with Combinatorial Strategies", "authors": [ "Xiang-yang Li", "Yaqin Zhou" ], "categories": [ "cs.LG" ], "abstract": "In this paper we study a generalized version of classical multi-armed bandits (MABs) problem by allowing for arbitrary constraints on constituent bandits at each decision point. The motivation of this study comes from many situations that involve repeatedly making choices subject to arbitrary constraints in an uncertain environment: for instance, regularly deciding which advertisements to display online in order to gain high click-through-rate without knowing user preferences, or what route to drive home each day under uncertain weather and traffic conditions. Assume that there are $K$ unknown random variables (RVs), i.e., arms, each evolving as an \\emph{i.i.d} stochastic process over time. At each decision epoch, we select a strategy, i.e., a subset of RVs, subject to arbitrary constraints on constituent RVs. We then gain a reward that is a linear combination of observations on selected RVs. The performance of prior results for this problem heavily depends on the distribution of strategies generated by corresponding learning policy. For example, if the reward-difference between the best and second best strategy approaches zero, prior result may lead to arbitrarily large regret. Meanwhile, when there are exponential number of possible strategies at each decision point, naive extension of a prior distribution-free policy would cause poor performance in terms of regret, computation and space complexity. To this end, we propose an efficient Distribution-Free Learning (DFL) policy that achieves zero regret, regardless of the probability distribution of the resultant strategies. Our learning policy has both $O(K)$ time complexity and $O(K)$ space complexity. In successive generations, we show that even if finding the optimal strategy at each decision point is NP-hard, our policy still allows for approximated solutions while retaining near zero-regret.", "revisions": [ { "version": "v2", "updated": "2014-05-11T03:45:24.000Z", "title": "Multi-Armed Bandits With Combinatorial Strategies Under Stochastic Bandits", "abstract": "We consider the following linearly combinatorial multi-armed bandits (MABs) problem. In a discrete time system, there are $K$ unknown random variables (RVs), i.e., arms, each evolving as an i.i.d stochastic process over time. At each time slot, we select a set of $N$ ($N \\leq K$) RVs, i.e., strategy, subject to an arbitrarily constraint. We then gain a reward that is a linear combination of observations on selected RVs. Our goal is to minimize the regret, defined as the difference between the summed reward obtained by an optimal static policy that knew the mean of each RV, and that obtained by a specified learning policy that does not know. A prior result for this problem has achieved zero regret (the expect of regret over time approaches zero when time goes to infinity), but dependent on probability distribution of strategies generated by the learning policy. The regret becomes arbitrarily large if the difference between the reward of the best and second best strategy approaches zero. Meanwhile, when there are exponential number of combinations, naive extension of a prior distribution-free policy would cause poor performance in terms of regret, computation and space complexity. 
We propose an efficient Distribution-Free Learning (DFL) policy that achieves zero regret without dependence on the probability distribution of strategies. Our learning policy requires only $O(K)$ time and space complexity. When finding the optimal strategy involves NP-hard problems, our policy provides a flexible scheme for choosing approximation algorithms to solve the problem efficiently while retaining zero regret.", "comment": null, "journal": null, "doi": null }, { "version": "v3", "updated": "2014-10-05T04:20:27.000Z" } ], "analyses": { "keywords": [ "multi-armed bandits", "combinatorial strategies", "stochastic bandits", "zero regret", "second best strategy approaches zero" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable", "adsabs": "2013arXiv1307.5438L" } } }