arXiv Analytics

arXiv:2008.08044 [stat.ML]

Bayesian neural networks and dimensionality reduction

Deborshee Sen, Theodore Papamarkou, David Dunson

Published 2020-08-18 (Version 1)

In conducting non-linear dimensionality reduction and feature learning, it is common to suppose that the data lie near a lower-dimensional manifold. A class of model-based approaches for such problems places latent variables inside an unknown non-linear regression function; this class includes Gaussian process latent variable models and variational auto-encoders (VAEs) as special cases. VAEs are artificial neural networks (ANNs) that employ approximations to make computation tractable; however, current implementations lack adequate uncertainty quantification in estimating the parameters, predictive densities, and lower-dimensional subspace, and can be unstable and hard to interpret in practice. We attempt to solve these problems by deploying Markov chain Monte Carlo (MCMC) sampling algorithms for Bayesian inference in ANN models with latent variables. We address issues of identifiability by imposing constraints on the ANN parameters and by using anchor points, and we demonstrate the approach on simulated and real data examples. We find that current MCMC sampling schemes face fundamental challenges in neural networks involving latent variables, motivating new research directions.
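To make the setup concrete, below is a minimal sketch, not the authors' implementation, of MCMC over a latent-variable neural network: each observation x_i is modeled as f(z_i; W) + noise for a small one-hidden-layer decoder f, and the weights W and latents z_i are sampled jointly. The network size, standard-normal priors, noise level, step size, and the choice of a random-walk Metropolis kernel are all illustrative assumptions; the identifiability constraints and anchor points discussed in the abstract are omitted here.

```python
# Illustrative sketch only (not the paper's code): random-walk Metropolis
# for a one-hidden-layer decoder x_i = f(z_i; W) + noise, sampling the
# network weights and the per-observation latents z_i jointly.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: points near a 1-d manifold embedded in 2-d.
n, d_latent, d_obs, d_hidden = 50, 1, 2, 8
z_true = rng.uniform(-2, 2, size=(n, d_latent))
X = np.hstack([np.sin(z_true), np.cos(z_true)]) + 0.05 * rng.normal(size=(n, d_obs))

def unpack(theta):
    """Split a flat parameter vector into decoder weights and latents."""
    i = 0
    W1 = theta[i:i + d_latent * d_hidden].reshape(d_latent, d_hidden); i += W1.size
    b1 = theta[i:i + d_hidden]; i += d_hidden
    W2 = theta[i:i + d_hidden * d_obs].reshape(d_hidden, d_obs); i += W2.size
    b2 = theta[i:i + d_obs]; i += d_obs
    Z = theta[i:].reshape(n, d_latent)
    return W1, b1, W2, b2, Z

def log_post(theta, sigma=0.05):
    """Gaussian likelihood plus standard-normal priors on weights and latents."""
    W1, b1, W2, b2, Z = unpack(theta)
    f = np.tanh(Z @ W1 + b1) @ W2 + b2            # decoder network f(z; W)
    log_lik = -0.5 * np.sum((X - f) ** 2) / sigma ** 2
    log_prior = -0.5 * np.sum(theta ** 2)         # N(0, 1) priors on everything
    return log_lik + log_prior

dim = d_latent * d_hidden + d_hidden + d_hidden * d_obs + d_obs + n * d_latent
theta = 0.1 * rng.normal(size=dim)
lp, step, accepted = log_post(theta), 0.02, 0

for it in range(20000):                            # random-walk Metropolis
    prop = theta + step * rng.normal(size=dim)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        accepted += 1

print(f"acceptance rate: {accepted / 20000:.2f}, final log-posterior: {lp:.1f}")
```

Even in this toy version, the joint posterior over weights and latents is highly multimodal (e.g., sign flips of W and Z leave f unchanged), which is the kind of identifiability and mixing difficulty the abstract refers to and the motivation for parameter constraints and anchor points.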

Related articles:
arXiv:1903.07594 [stat.ML] (Published 2019-03-18)
Combining Model and Parameter Uncertainty in Bayesian Neural Networks
arXiv:2309.16314 [stat.ML] (Published 2023-09-28)
A Primer on Bayesian Neural Networks: Review and Debates
arXiv:2304.02595 [stat.ML] (Published 2023-04-02)
Bayesian neural networks via MCMC: a Python-based tutorial