arXiv:1609.08976 [stat.ML]

Variational Autoencoder for Deep Learning of Images, Labels and Captions

Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, Lawrence Carin

Published 2016-09-28 (Version 1)

A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN approximates a distribution over the latent DGDN features/code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label/caption for a new image at test time, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework can model an image in the presence or absence of associated labels/captions, it yields a new semi-supervised setting for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone.
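The sketch below illustrates the overall structure the abstract describes: a CNN encoder that outputs a distribution over latent codes, a deconvolutional decoder, a label model attached to the same code, and test-time averaging of label predictions over samples from the encoder's distribution. It is a minimal stand-in, not the paper's model: a standard Gaussian reparameterized VAE replaces the DGDN, and a linear softmax classifier replaces the Bayesian SVM; all layer sizes are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's DGDN / Bayesian SVM).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=64, num_classes=10):
        super().__init__()
        # CNN encoder: approximates q(z | x) with a diagonal Gaussian.
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.fc_logvar = nn.Linear(64 * 7 * 7, latent_dim)
        # Deconvolutional decoder p(x | z) (stand-in for the DGDN).
        self.fc_dec = nn.Linear(latent_dim, 64 * 7 * 7)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 7 -> 14
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # 14 -> 28
        )
        # Label model p(y | z) (stand-in for the Bayesian SVM).
        self.classifier = nn.Linear(latent_dim, num_classes)

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 7, 7)
        return self.dec(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decode(z), mu, logvar, self.classifier(z)

    @torch.no_grad()
    def predict_label(self, x, n_samples=20):
        # Test-time prediction: average the label distribution over samples
        # drawn from q(z | x), as described in the abstract.
        mu, logvar = self.encode(x)
        probs = 0.0
        for _ in range(n_samples):
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            probs = probs + F.softmax(self.classifier(z), dim=-1)
        return probs / n_samples
```

In the semi-supervised setting sketched here, the classification term is simply dropped from the training loss for unlabeled images, so the encoder and decoder still learn from the reconstruction and KL terms alone.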

Related articles:
arXiv:1804.09060 [stat.ML] (Published 2018-04-24)
An Information-Theoretic View for Deep Learning
arXiv:1804.10988 [stat.ML] (Published 2018-04-29)
SHADE: Information-Based Regularization for Deep Learning
arXiv:1805.05814 [stat.ML] (Published 2018-05-14)
SHADE: Information-Based Regularization for Deep Learning