arXiv Analytics

arXiv:1408.0822 [math.PR]

Surprise probabilities in Markov chains

James Norris, Yuval Peres, Alex Zhai

Published 2014-08-04 (Version 1)

In a Markov chain started at a state $x$, the hitting time $\tau(y)$ is the first time that the chain reaches another state $y$. We study the probability $\mathbf{P}_x(\tau(y) = t)$ that the first visit to $y$ occurs precisely at a given time $t$. Informally speaking, the event that a new state is visited at a large time $t$ may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$. 2) In a reversible chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{\sqrt{2n}}{t}$ for $t \ge 4n + 4$. 3) For random walk on a simple graph with $n \ge 2$ vertices, $\mathbf{P}_x(\tau(y) = t) \le \frac{4e \log n}{t}$. We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): For random walk on an $n$-vertex graph, for every initial vertex $x$, \[ \sum_y \left( \sup_{t \ge 0} p^t(x, y) \right) = O(\log n). \]
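The quantity $\mathbf{P}_x(\tau(y) = t)$ can be computed exactly for a small finite chain by propagating the distribution of the walk while forbidding visits to $y$ before time $t$, then taking one final step into $y$. The sketch below is only an illustration of the definition on a hypothetical 3-state chain (the matrix is an arbitrary example, not from the paper); it also lets one check bound 1), $\mathbf{P}_x(\tau(y) = t) \le n/t$, numerically.

```python
def first_hit_prob(P, x, y, t):
    """Exact P_x(tau(y) = t) for a finite chain with transition matrix P.

    Propagate the sub-probability distribution over states, killing any
    mass that enters y during steps 1..t-1, then take one last step
    into y.  Assumes x != y and t >= 1.
    """
    n = len(P)
    dist = [0.0] * n
    dist[x] = 1.0
    for _ in range(t - 1):
        new = [0.0] * n
        for z in range(n):
            if z == y or dist[z] == 0.0:
                continue  # mass at y would mean an earlier first visit
            for w in range(n):
                if w != y:  # forbid entering y before time t
                    new[w] += dist[z] * P[z][w]
        dist = new
    # final step: first visit to y occurs exactly at time t
    return sum(dist[z] * P[z][y] for z in range(n) if z != y)

# Hypothetical 3-state chain; rows are transition probabilities.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.5, 0.3],
     [0.1, 0.0, 0.9]]

n = len(P)
for t in (2, 5, 20):
    p = first_hit_prob(P, x=0, y=2, t=t)
    print(f"t={t}: P_x(tau(y)={t}) = {p:.6f}, bound n/t = {n/t:.4f}")
```

Since the chain above is irreducible, $\tau(y)$ is finite almost surely, so summing `first_hit_prob` over $t$ recovers total mass 1; each individual term stays below the uniform $n/t$ bound from the abstract.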
