arXiv Analytics

arXiv:2110.08884 [stat.ML]

Persuasion by Dimension Reduction

Semyon Malamud, Andreas Schrimpf

Published 2021-10-17, updated 2022-10-04 (version 2)

How should an agent (the sender) observing multi-dimensional data (the state vector) persuade another agent to take the desired action? We show that it is always optimal for the sender to perform a (non-linear) dimension reduction by projecting the state vector onto a lower-dimensional object that we call the "optimal information manifold." We characterize the geometric properties of this manifold and link them to the sender's preferences. The optimal policy splits information into "good" and "bad" components. When the sender's marginal utility is linear, revealing the full magnitude of good information is always optimal. In contrast, with concave marginal utility, optimal information design conceals the extreme realizations of good information and reveals only its direction (sign). We illustrate these effects by explicitly solving several multi-dimensional Bayesian persuasion problems.
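To make the full-revelation versus sign-only comparison concrete, here is a minimal toy simulation, not the paper's model: it assumes a Gaussian state, a hypothetical projection direction w, and a receiver who simply plays the posterior mean of the projected component, and it contrasts the sender's expected payoff under a convex versus a concave payoff in that action. By Jensen's inequality, full revelation does weakly better when the payoff is convex in the receiver's action, while the coarser sign-only signal does better when it is concave; the paper's actual characterization is stated in terms of linear versus concave marginal utility and a richer receiver problem.

    import numpy as np

    # Toy illustration only (assumed setup, not the paper's model): the sender
    # observes a 2-D Gaussian state x, projects it onto a hypothetical direction
    # w to get a scalar "good" component theta, and chooses between two signals:
    #   (a) reveal theta fully, or (b) reveal only sign(theta).
    # The receiver is assumed to play the posterior mean of theta; the sender's
    # payoff is u(action) for a convex or a concave u.

    rng = np.random.default_rng(0)
    x = rng.standard_normal((200_000, 2))      # draws of the state vector
    w = np.array([1.0, 0.5])
    w /= np.linalg.norm(w)                     # hypothetical projection direction
    theta = x @ w                              # projected component, ~ N(0, 1)

    # Policy (a): full revelation -> the receiver's action is theta itself.
    action_full = theta

    # Policy (b): sign-only revelation -> the action is E[theta | sign(theta)].
    action_sign = np.where(theta > 0,
                           theta[theta > 0].mean(),
                           theta[theta <= 0].mean())

    payoffs = {
        "convex  u(a) = a^2": lambda a: a ** 2,
        "concave u(a) = -a^2": lambda a: -(a ** 2),
    }

    for name, u in payoffs.items():
        print(f"{name}:  full = {u(action_full).mean():+.3f}   "
              f"sign-only = {u(action_sign).mean():+.3f}")

With these choices the full signal comes out ahead under the convex payoff and the sign-only signal under the concave one, mirroring, in a much cruder setting, the abstract's contrast between revealing the full magnitude of good information and revealing only its direction.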

Comments: This paper has been replaced and subsumed by arXiv:2210.00637. arXiv admin note: text overlap with arXiv:2102.10909
Related articles:
arXiv:0902.4389 [stat.ML] (Published 2009-02-25)
Dimension reduction in representation of the data
arXiv:2112.09746 [stat.ML] (Published 2021-12-17, updated 2022-02-09)
Supervised Multivariate Learning with Simultaneous Feature Auto-grouping and Dimension Reduction
arXiv:1006.5060 [stat.ML] (Published 2010-06-25, updated 2010-07-01)
Learning sparse gradients for variable selection and dimension reduction