{ "id": "1906.06419", "version": "v1", "published": "2019-06-14T21:58:58.000Z", "updated": "2019-06-14T21:58:58.000Z", "title": "Learning Correlated Latent Representations with Adaptive Priors", "authors": [ "Da Tang", "Dawen Liang", "Nicholas Ruozzi", "Tony Jebara" ], "comment": "12pages, 2 figures", "categories": [ "cs.LG", "stat.ML" ], "abstract": "Variational Auto-Encoders (VAEs) have been widely applied for learning compact low-dimensional latent representations for high-dimensional data. When the correlation structure among data points is available, previous work proposed Correlated Variational Auto-Encoders (CVAEs) which employ a structured mixture model as prior and a structured variational posterior for each mixture component to enforce the learned latent representations to follow the same correlation structure. However, as we demonstrate in this paper, such a choice can not guarantee that CVAEs can capture all of the correlations. Furthermore, it prevents us from obtaining a tractable joint and marginal variational distribution. To address these issues, we propose Adaptive Correlated Variational Auto-Encoders (ACVAEs), which apply an adaptive prior distribution that can be adjusted during training, and learn a tractable joint distribution via a saddle-point optimization procedure. Its tractable form also enables further refinement with belief propagation. Experimental results on two real datasets show that ACVAEs outperform other benchmarks significantly.", "revisions": [ { "version": "v1", "updated": "2019-06-14T21:58:58.000Z" } ], "analyses": { "keywords": [ "learning correlated latent representations", "adaptive prior", "correlated variational auto-encoders", "correlation structure", "learning compact low-dimensional latent representations" ], "note": { "typesetting": "TeX", "pages": 12, "language": "en", "license": "arXiv", "status": "editable" } } }