{ "id": "2304.01384", "version": "v1", "published": "2023-04-03T21:17:35.000Z", "updated": "2023-04-03T21:17:35.000Z", "title": "Large Deviations for Empirical Measures of Self-Interacting Markov Chains", "authors": [ "Amarjit Budhiraja", "Adam Waterbury", "Pavlos Zoubouloglou" ], "comment": "54 pages", "categories": [ "math.PR", "math.OC" ], "abstract": "Let $\\Delta^o$ be a finite set and, for each probability measure $m$ on $\\Delta^o$, let $G(m)$ be a transition probability kernel on $\\Delta^o$. Fix $x_0 \\in \\Delta^o$ and consider the chain $\\{X_n, \\; n \\in \\mathbb{N}_0\\}$ of $\\Delta^o$-valued random variables such that $X_0=x_0$, and given $X_0, \\ldots , X_n$, the conditional distribution of $X_{n+1}$ is $G(L^{n+1})(X_n, \\cdot)$, where $L^{n+1} = \\frac{1}{n+1} \\sum_{i=0}^{n} \\delta_{X_i}$ is the empirical measure at instant $n$. Under conditions on $G$ we establish a large deviation principle for the empirical measure sequence $\\{L^n, \\; n \\in \\mathbb{N}\\}$. As one application of this result we obtain large deviation asymptotics for the Aldous-Flannery-Palacios (1988) approximation scheme for quasistationary distributions of irreducible finite state Markov chains. The conditions on $G$ cover various other models of reinforced stochastic evolutions as well, including certain vertex reinforced and edge reinforced random walks and a variant of the PageRank algorithm. The particular case where $G(m)$ does not depend on $m$ corresponds to the classical results of Donsker and Varadhan (1975) on large deviations of empirical measures of Markov processes. 
However, unlike this classical setting, for the general self-interacting models considered here, the rate function takes a very different form; it is typically non-convex and is given through a dynamical variational formula with an infinite horizon discounted objective function.", "revisions": [ { "version": "v1", "updated": "2023-04-03T21:17:35.000Z" } ], "analyses": { "subjects": [ "60F10", "93E03" ], "keywords": [ "empirical measure", "self-interacting markov chains", "horizon discounted objective function", "irreducible finite state markov chains", "edge reinforced random walks" ], "note": { "typesetting": "TeX", "pages": 54, "language": "en", "license": "arXiv", "status": "editable" } } }
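The evolution described in the abstract — sample $X_{n+1}$ from the kernel $G(L^{n+1})(X_n, \cdot)$, where $L^{n+1}$ is the running empirical measure of $X_0, \ldots, X_n$ — can be simulated directly. The sketch below is illustrative only: the function names and the particular reinforcement kernel (transition weights proportional to the empirical mass of the target state, a vertex-reinforced flavor) are assumptions for demonstration, not the specific models analyzed in the paper.

```python
import numpy as np

def simulate_self_interacting_chain(G, n_states, x0, n_steps, seed=None):
    """Run the chain X_{n+1} ~ G(L^{n+1})(X_n, .), where L^{n+1} is the
    empirical measure (1/(n+1)) * sum_{i=0}^{n} delta_{X_i}."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_states)      # visit counts; counts / counts.sum() = L^{n+1}
    x = x0
    counts[x] += 1.0                 # X_0 contributes to L^1
    path = [x]
    for _ in range(n_steps):
        L = counts / counts.sum()    # current empirical measure
        kernel = G(L)                # measure-dependent transition matrix G(L)
        x = rng.choice(n_states, p=kernel[x])
        counts[x] += 1.0
        path.append(x)
    return np.array(path), counts / counts.sum()

def make_reinforced_kernel(base, eps=0.1):
    """Hypothetical example of a G: reweight a base transition matrix by the
    empirical mass of the target state (plus eps to keep it irreducible),
    then renormalize each row."""
    def G(m):
        w = base * (eps + m)[None, :]
        return w / w.sum(axis=1, keepdims=True)
    return G

# Usage: 3-state chain started at state 0, run for 500 steps.
G = make_reinforced_kernel(np.ones((3, 3)))
path, L = simulate_self_interacting_chain(G, n_states=3, x0=0, n_steps=500, seed=0)
```

Note that, in the special case where `G` ignores its argument `m`, this reduces to an ordinary Markov chain, recovering the Donsker-Varadhan setting mentioned in the abstract.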