arXiv Analytics

arXiv:2001.06658 [cs.CV]

Text-to-Image Generation with Attention Based Recurrent Neural Networks

Tehseen Zia, Shahan Arif, Shakeeb Murtaza, Mirza Ahsan Ullah

Published 2020-01-18 (Version 1)

Conditional image modeling based on textual descriptions is a relatively new domain in unsupervised learning. Previous approaches use either latent variable models or generative adversarial networks. The former are approximated with variational auto-encoders and rely on intractable inference, which can hamper their performance; the latter are unstable to train because of their Nash-equilibrium-based objective function. We develop a tractable and stable caption-based image generation model. The model uses an attention-based encoder to learn word-to-pixel dependencies, and a conditional autoregressive decoder to learn pixel-to-pixel dependencies and generate images. Experiments are performed on the Microsoft COCO and MNIST-with-captions datasets, and performance is evaluated using the Structural Similarity Index. Results show that the proposed model performs better than contemporary approaches and generates better-quality images.

Keywords: generative image modeling, autoregressive image modeling, caption-based image generation, neural attention, recurrent neural networks.
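
The abstract names the two components but gives no implementation detail. The following is a minimal, illustrative PyTorch sketch of the general idea only: attention over encoded caption words conditioning a recurrent decoder that models pixels one at a time. All module names, layer sizes, and the single-layer LSTM choice are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (not the paper's code): a caption encoder with
# attention feeding a conditional autoregressive pixel decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnCaptionImageModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256,
                 pixel_levels=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word embeddings
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim * 2, 1)           # additive-style attention score
        self.decoder = nn.LSTMCell(pixel_levels + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, pixel_levels)     # softmax over 256 intensities

    def forward(self, captions, pixels):
        # captions: (B, T) word ids; pixels: (B, N) integer intensities in [0, 255]
        enc, _ = self.encoder(self.embed(captions))        # (B, T, H) word annotations
        B, N = pixels.shape
        h = enc.new_zeros(B, enc.size(-1))
        c = torch.zeros_like(h)
        # one-hot previous pixels, shifted right so step i only sees pixels < i
        prev = F.one_hot(F.pad(pixels[:, :-1], (1, 0)),
                         num_classes=self.out.out_features).float()
        logits = []
        for i in range(N):
            # word-to-pixel attention: score each word against the decoder state
            scores = self.attn(torch.cat([enc, h.unsqueeze(1).expand_as(enc)], dim=-1))
            context = (F.softmax(scores, dim=1) * enc).sum(dim=1)   # (B, H) caption context
            # pixel-to-pixel recurrence: previous pixel plus caption context
            h, c = self.decoder(torch.cat([prev[:, i], context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                  # (B, N, 256) pixel distributions
```

Training such a model would minimize the negative log-likelihood of each pixel (e.g. `F.cross_entropy` over the 256 intensity logits), an exact and stable objective rather than a GAN's minimax game, in line with the tractability claim in the abstract.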

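On the evaluation side, a hedged example of computing the Structural Similarity Index named in the abstract, using scikit-image; the arrays below are random placeholders standing in for a generated sample and its ground-truth image.

```python
# Compute SSIM between a generated image and a reference image.
import numpy as np
from skimage.metrics import structural_similarity

generated = np.random.rand(64, 64)   # placeholder for a model sample
reference = np.random.rand(64, 64)   # placeholder for the ground-truth image
score = structural_similarity(generated, reference, data_range=1.0)
print(f"SSIM: {score:.3f}")          # 1.0 means structurally identical images
```
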
Related articles:
arXiv:1712.00512 [cs.CV] (Published 2017-12-01)
Learning Neural Markers of Schizophrenia Disorder Using Recurrent Neural Networks
arXiv:1909.01939 [cs.CV] (Published 2019-09-03)
EleAtt-RNN: Adding Attentiveness to Neurons in Recurrent Neural Networks
arXiv:2303.09522 [cs.CV] (Published 2023-03-16)
$P+$: Extended Textual Conditioning in Text-to-Image Generation