arXiv Analytics


arXiv:2101.08699 [cs.LG]

An empirical evaluation of active inference in multi-armed bandits

Dimitrije Markovic, Hrvoje Stojic, Sarah Schwoebel, Stefan J. Kiebel

Published 2021-01-21 (Version 1)

A key feature of sequential decision making under uncertainty is the need to balance exploitation, choosing the best action according to current knowledge, with exploration, obtaining information about the values of other actions. The multi-armed bandit problem, a classical task that captures this trade-off, has served as a vehicle in machine learning for developing bandit algorithms that proved useful in numerous industrial applications. The active inference framework, an approach to sequential decision making recently developed in neuroscience for understanding human and animal behaviour, is distinguished by its sophisticated strategy for resolving the exploration-exploitation trade-off. This makes active inference an exciting alternative to already established bandit algorithms. Here we derive an efficient and scalable approximate active inference algorithm and compare it to two state-of-the-art bandit algorithms, Bayesian upper confidence bound and optimistic Thompson sampling, on two types of bandit problems: a stationary bandit and a dynamic switching bandit. Our empirical evaluation shows that the active inference algorithm does not produce efficient long-term behaviour in stationary bandits. However, in the more challenging switching bandit problem, active inference performs substantially better than the two bandit algorithms. These results open exciting avenues for further research in theoretical and applied machine learning, and lend additional credibility to active inference as a general framework for studying human and animal behaviour.
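For readers unfamiliar with the two baselines named in the abstract, the sketch below is a minimal, illustrative Bernoulli-bandit simulation of Bayesian upper confidence bound and optimistic Thompson sampling; it is not the authors' code, and the arm probabilities, horizon, priors, and the simplified UCB quantile schedule are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

def bayes_ucb(alpha, beta_, t, horizon):
    # Bayesian UCB: play the arm with the highest posterior quantile.
    # The quantile level rises towards 1 over time (simplified schedule).
    q = 1.0 - 1.0 / ((t + 1) * np.log(horizon))
    return int(np.argmax(beta.ppf(q, alpha, beta_)))

def optimistic_thompson(alpha, beta_, t, horizon):
    # Optimistic Thompson sampling: draw a posterior sample per arm,
    # but never below the posterior mean ("optimistic" clipping).
    # t and horizon are unused; kept for a uniform policy signature.
    mean = alpha / (alpha + beta_)
    sample = rng.beta(alpha, beta_)
    return int(np.argmax(np.maximum(sample, mean)))

def run(policy, probs, horizon=1000):
    # Stationary Bernoulli bandit with Beta(1, 1) priors on each arm.
    k = len(probs)
    alpha, beta_ = np.ones(k), np.ones(k)
    regret = 0.0
    for t in range(horizon):
        arm = policy(alpha, beta_, t, horizon)
        reward = int(rng.random() < probs[arm])
        alpha[arm] += reward
        beta_[arm] += 1 - reward
        regret += max(probs) - probs[arm]
    return regret

probs = [0.3, 0.5, 0.7]   # hypothetical arm success probabilities
print("Bayes-UCB cumulative regret:", run(bayes_ucb, probs))
print("Optimistic TS cumulative regret:", run(optimistic_thompson, probs))
```

Both policies maintain the same Beta posteriors; they differ only in how they turn the posterior into an action, which is exactly the exploration-exploitation choice the paper compares against active inference.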

Related articles:
arXiv:1704.03926 [cs.LG] (Published 2017-04-12)
Value Directed Exploration in Multi-Armed Bandits with Structured Priors
arXiv:2205.13930 [cs.LG] (Published 2022-05-27)
Fairness and Welfare Quantification for Regret in Multi-Armed Bandits
arXiv:2009.06606 [cs.LG] (Published 2020-09-14)
Hellinger KL-UCB based Bandit Algorithms for Markovian and i.i.d. Settings