arXiv Analytics

arXiv:2101.06369 [stat.CO]

Non-convex weakly smooth Langevin Monte Carlo using regularization

Dao Nguyen, Xin Dang, Yixin Chen

Published 2021-01-16, Version 1

Discretization of continuous-time diffusion processes is a widely recognized method for sampling. However, the canonical Euler-Maruyama discretization of the Langevin diffusion, referred to as Langevin Monte Carlo (LMC), has been studied mostly in the context of smooth (gradient-Lipschitz) and strongly log-concave densities, which considerably hinders its deployment in many sciences, including computational statistics and statistical learning. In this paper, we establish several theoretical contributions to the literature on such sampling methods for weakly smooth and non-convex densities. In particular, we combine convexification of the non-convex domain \citep{ma2019sampling} with regularization to prove convergence in Kullback-Leibler (KL) divergence, with the number of iterations required to reach an $\epsilon$-neighborhood of the target distribution depending only polynomially on the dimension. We relax the conditions of \citep{vempala2019rapid} and prove convergence guarantees under isoperimetry, degenerate convexity, and non-strong convexity at infinity.
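The LMC scheme the abstract refers to is the Euler-Maruyama discretization of the Langevin diffusion: $x_{k+1} = x_k - \eta \nabla U(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I)$, which approximately samples from $\pi(x) \propto e^{-U(x)}$. A minimal sketch of this baseline iteration (not the paper's regularized variant; the function and parameter names are illustrative):

```python
import numpy as np

def lmc_sample(grad_U, x0, step=1e-2, n_iters=10_000, rng=None):
    """Unadjusted Langevin Monte Carlo (Euler-Maruyama discretization).

    Iterates x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2*step) * xi_k,
    with xi_k ~ N(0, I), to sample approximately from pi(x) ~ exp(-U(x)).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Sanity check on a smooth, strongly log-concave target:
# U(x) = ||x||^2 / 2, so grad_U(x) = x and pi = N(0, I).
chain = lmc_sample(lambda x: x, x0=np.zeros(2), step=0.05,
                   n_iters=20_000, rng=0)
```

Note that the fixed step size introduces a discretization bias (here the stationary variance is inflated by roughly a factor of $1/(1-\eta/2)$), which is precisely the kind of error the convergence analysis in the paper controls.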

Related articles:
arXiv:2002.10071 [stat.CO] (Published 2020-02-24)
Weakly smooth Langevin Monte Carlo using p-generalized Gaussian smoothing
arXiv:1803.04947 [stat.CO] (Published 2018-03-13)
Takeuchi's Information Criteria as a form of Regularization
arXiv:2112.09311 [stat.CO] (Published 2021-12-17, updated 2022-02-22)
Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials