arXiv Analytics

arXiv:2303.07245 [cs.IT]

Concentration without Independence via Information Measures

Amedeo Roberto Esposito, Marco Mondelli

Published 2023-03-13, updated 2023-10-30 (Version 2)

We propose a novel approach to concentration for non-independent random variables. The main idea is to "pretend" that the random variables are independent and pay a multiplicative price measuring how far they are from actually being independent. This price is encapsulated in the Hellinger integral between the joint and the product of the marginals, which is then upper bounded by leveraging tensorisation properties. Our bounds represent a natural generalisation of concentration inequalities in the presence of dependence: we recover exactly the classical bounds (McDiarmid's inequality) when the random variables are independent. Furthermore, in a "large deviations" regime, we obtain the same decay in the probability as for the independent case, even when the random variables display non-trivial dependencies. To show this, we consider a number of applications of interest. First, we provide a bound for Markov chains with finite state space. Then, we consider the Simple Symmetric Random Walk, which is a non-contracting Markov chain, and a non-Markovian setting in which the stochastic process depends on its entire past. To conclude, we propose an application to Markov Chain Monte Carlo methods, where our approach leads to an improved lower bound on the minimum burn-in period required to reach a certain accuracy. In all of these settings, we provide a regime of parameters in which our bound fares better than what the state of the art can provide.
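
As a rough illustration of the mechanism described in the abstract (not a statement taken from the paper), the LaTeX sketch below spells out one plausible form of the change-of-measure step. The order \alpha, the Hellinger integral \mathcal{H}_\alpha, and the bounded-differences constants c_i are notation assumed here purely for illustration.

% Sketch under assumed notation: change of measure from the joint law P_{X^n}
% to the product of marginals \bar{P} = P_{X_1} \otimes \cdots \otimes P_{X_n}
% via Holder's inequality, for any event E and any order \alpha > 1:
\[
  P_{X^n}(E)
  \;\le\;
  \mathcal{H}_\alpha\!\left(P_{X^n} \,\middle\|\, \bar{P}\right)^{1/\alpha}
  \cdot \bar{P}(E)^{\frac{\alpha-1}{\alpha}},
  \qquad
  \mathcal{H}_\alpha(P \,\|\, Q) := \mathbb{E}_{Q}\!\left[\left(\tfrac{\mathrm{d}P}{\mathrm{d}Q}\right)^{\alpha}\right].
\]
% Taking E = \{ f(X^n) - \mathbb{E}_{\bar{P}}[f] \ge t \} for a function f with
% bounded differences c_1, ..., c_n, and bounding \bar{P}(E) with McDiarmid's
% inequality (which applies because \bar{P} is a product measure), gives
\[
  P_{X^n}\!\left( f(X^n) - \mathbb{E}_{\bar{P}}[f] \ge t \right)
  \;\le\;
  \mathcal{H}_\alpha\!\left(P_{X^n} \,\middle\|\, \bar{P}\right)^{1/\alpha}
  \exp\!\left( -\frac{\alpha-1}{\alpha} \cdot \frac{2t^2}{\sum_{i=1}^{n} c_i^2} \right).
\]
% When the X_i are independent, \bar{P} = P_{X^n}, the Hellinger integral equals 1,
% and letting \alpha \to \infty recovers McDiarmid's inequality exactly.

In this reading, the "multiplicative price" of the abstract corresponds to the factor \mathcal{H}_\alpha^{1/\alpha}, and the tensorisation step would be what keeps that factor under control for Markov chains and the other dependent processes considered in the paper.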

Related articles:
arXiv:1812.00127 [cs.IT] (Published 2018-12-01)
Markov chain Monte Carlo Methods For Lattice Gaussian Sampling: Lattice Reduction and Decoding Optimization
arXiv:1811.12719 [cs.IT] (Published 2018-11-30)
Markov chain Monte Carlo Methods For Lattice Gaussian Sampling: Convergence Analysis and Enhancement
arXiv:2107.01975 [cs.IT] (Published 2021-07-05)
The information loss of a stochastic map