arXiv:2502.06777 [stat.ML]

Learning an Optimal Assortment Policy under Observational Data

Yuxuan Han, Han Zhong, Miao Lu, Jose Blanchet, Zhengyuan Zhou

Published 2025-02-10, Version 1

We study the fundamental problem of offline assortment optimization under the Multinomial Logit (MNL) model, where sellers must determine the optimal subset of products to offer based solely on historical customer choice data. While most existing approaches to learning-based assortment optimization focus on learning the optimal assortment online through repeated interactions with customers, such exploration can be costly or even impractical in many real-world settings. In this paper, we consider the offline learning paradigm and investigate the minimal data requirements for efficient offline assortment optimization. To this end, we introduce Pessimistic Rank-Breaking (PRB), an algorithm that combines rank-breaking with pessimistic estimation. We prove that PRB is nearly minimax optimal by establishing a tight suboptimality upper bound and a nearly matching lower bound. This further shows that "optimal item coverage" - where each item in the optimal assortment appears sufficiently often in the historical data - is both sufficient and necessary for efficient offline learning, significantly relaxing the previous requirement of observing the complete optimal assortment in the data. Our results provide fundamental insights into the data requirements for offline assortment optimization under the MNL model.
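
To make the setting concrete, below is a minimal Python sketch of the standard MNL choice model and of the "pessimistic estimation" idea the abstract refers to. It is written under stated assumptions: the rank-breaking estimation step is omitted (the estimated utilities u_hat are taken as given), the confidence widths and the brute-force search are illustrative placeholders, and all function names are hypothetical. This is not the paper's PRB algorithm as specified, only a sketch of the pessimism principle applied to MNL assortment selection.

import itertools
import numpy as np

def mnl_choice_probs(utilities, assortment):
    # Standard MNL: P(choose i | S) = exp(u_i) / (1 + sum_{j in S} exp(u_j)),
    # where the constant 1 is the weight of the no-purchase option.
    weights = np.exp([utilities[i] for i in assortment])
    denom = 1.0 + weights.sum()
    return {i: w / denom for i, w in zip(assortment, weights)}

def expected_revenue(utilities, revenues, assortment):
    # Expected per-customer revenue of offering assortment S under MNL.
    probs = mnl_choice_probs(utilities, assortment)
    return sum(revenues[i] * probs[i] for i in assortment)

def pessimistic_assortment(u_hat, widths, revenues, max_size):
    # Pessimism: optimize against lower-confidence-bound utilities
    # u_hat[i] - widths[i], so items rarely seen in the historical data
    # (large widths) are penalized. Brute force over small assortments,
    # purely for illustration.
    u_lcb = {i: u_hat[i] - widths[i] for i in u_hat}
    items = list(u_hat)
    best, best_rev = (), float("-inf")
    for k in range(1, max_size + 1):
        for S in itertools.combinations(items, k):
            rev = expected_revenue(u_lcb, revenues, S)
            if rev > best_rev:
                best, best_rev = S, rev
    return best, best_rev

# Example with assumed (made-up) estimates: widths are wider for items
# that appear less often in the data, mimicking the coverage condition.
u_hat = {0: 0.8, 1: 0.5, 2: -0.2}
widths = {0: 0.1, 1: 0.3, 2: 0.6}
revenues = {0: 1.0, 1: 1.5, 2: 2.0}
print(pessimistic_assortment(u_hat, widths, revenues, max_size=2))

The sketch also makes the "optimal item coverage" condition intuitive: if every item in the optimal assortment is well covered by the data, its width is small and pessimism barely distorts its estimated utility, so the pessimistic optimizer can still recover a near-optimal assortment.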

Related articles:
arXiv:1807.04183 [stat.ML] (Published 2018-07-11)
Optimization over Continuous and Multi-dimensional Decisions with Observational Data
arXiv:2403.20250 [stat.ML] (Published 2024-03-29)
Optimal Policy Learning with Observational Data in Multi-Action Scenarios: Estimation, Risk Preference, and Potential Failures
arXiv:1608.08925 [stat.ML] (Published 2016-08-31)
Learning to Personalize from Observational Data