
arXiv:1504.01840 [cs.LG]

Autonomous CRM Control via CLV Approximation with Deep Reinforcement Learning in Discrete and Continuous Action Space

Yegor Tkachenko

Published 2015-04-08 (Version 1)

The paper outlines a framework for autonomous control of a CRM (customer relationship management) system. First, it explores how a modified version of the widely accepted Recency-Frequency-Monetary Value (RFM) system of metrics can be used to define the state space of clients or donors. Second, it describes a procedure for determining the optimal direct marketing action, in both discrete and continuous action spaces, for a given individual based on that individual's position in the state space. The procedure uses model-free Q-learning to train a deep neural network that relates a client's position in the state space to the rewards associated with possible marketing actions. The estimated value function over the client state space can be interpreted as customer lifetime value (CLV), and thus allows for a quick plug-in estimate of CLV for a given client. Experimental results are presented based on the KDD Cup 1998 mailing dataset of donation solicitations.
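
To illustrate the general idea described in the abstract, here is a minimal sketch of model-free Q-learning with a small neural network over an RFM-style state space and discrete marketing actions. This is not the paper's implementation: the network size, number of actions, reward scale, and the synthetic batch are illustrative assumptions, and the CLV proxy is simply the greedy state value max_a Q(s, a) as the abstract suggests.

```python
# Sketch only (assumed setup, not the paper's architecture or hyperparameters):
# Q-learning with a neural network mapping an RFM state to per-action values.
import torch
import torch.nn as nn

N_ACTIONS = 12   # number of discrete mailing actions (assumed)
STATE_DIM = 3    # recency, frequency, monetary value (scaled features)
GAMMA = 0.9      # discount factor

class QNet(nn.Module):
    """Maps a client's RFM state to a Q-value for each marketing action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, s):
        return self.net(s)

q = QNet()
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def q_learning_step(batch):
    """One fitted Q-learning update on a batch of (s, a, r, s') transitions."""
    s, a, r, s_next = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)         # Q(s, a)
    with torch.no_grad():
        target = r + GAMMA * q(s_next).max(dim=1).values     # r + gamma * max_a' Q(s', a')
    loss = loss_fn(q_sa, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def clv_estimate(state):
    """Plug-in CLV proxy: value of the best action from this state, V(s) = max_a Q(s, a)."""
    with torch.no_grad():
        return q(torch.as_tensor(state, dtype=torch.float32)).max().item()

# Toy usage with synthetic transitions (stand-in for the KDD Cup 1998 data):
batch = (
    torch.rand(32, STATE_DIM),             # states
    torch.randint(0, N_ACTIONS, (32,)),    # actions taken
    torch.rand(32),                        # rewards (e.g. donation minus mailing cost)
    torch.rand(32, STATE_DIM),             # next states
)
print(q_learning_step(batch), clv_estimate([0.2, 0.5, 0.1]))
```

In this reading, the trained network serves double duty: argmax over its outputs selects the marketing action for a client, while the max itself gives the quick plug-in CLV estimate mentioned in the abstract.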
