arXiv Analytics

arXiv:1705.08395 [cs.LG]

Continual Learning in Generative Adversarial Nets

Ari Seff, Alex Beatson, Daniel Suo, Han Liu

Published 2017-05-23 (Version 1)

Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions. While the employed learning procedures typically assume that training data are drawn i.i.d. from the distribution of interest, it may be desirable to model distinct distributions that are observed sequentially, such as when different classes are encountered over time. Although conditional variants of deep generative models permit multiple distributions to be modeled by a single network in a disentangled fashion, they are susceptible to catastrophic forgetting when the distributions are encountered sequentially. In this paper, we adapt recent work on reducing catastrophic forgetting to the task of training generative adversarial networks on a sequence of distinct distributions, enabling continual generative modeling.
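
The "recent work on reducing catastrophic forgetting" adapted here is elastic weight consolidation (EWC), which penalizes movement of parameters deemed important to previously learned tasks. Below is a minimal PyTorch sketch (not the authors' code) of how such a penalty might attach to a GAN generator's loss. The Generator architecture and the helper names fisher_diagonal and ewc_penalty are illustrative assumptions, and disc stands for a discriminator trained on the previous distribution.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy MLP generator standing in for whatever architecture is used."""
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def fisher_diagonal(gen, disc, z_dim=64, n_batches=10, batch=100):
    # Estimate the diagonal Fisher information of the generator's
    # parameters from squared gradients of its adversarial loss on the
    # task just finished.
    fisher = {n: torch.zeros_like(p) for n, p in gen.named_parameters()}
    for _ in range(n_batches):
        z = torch.randn(batch, z_dim)
        # Non-saturating generator loss against the old discriminator.
        loss = -torch.log(torch.sigmoid(disc(gen(z))) + 1e-8).mean()
        gen.zero_grad()
        loss.backward()
        for n, p in gen.named_parameters():
            fisher[n] += p.grad.detach() ** 2
    return {n: f / n_batches for n, f in fisher.items()}

def ewc_penalty(gen, old_params, fisher, lam=100.0):
    # Quadratic penalty anchoring each parameter to its value after the
    # previous task, weighted by its estimated importance to the old
    # distribution.
    loss = torch.zeros(())
    for n, p in gen.named_parameters():
        loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * loss

# After finishing task A:
#   old_params = {n: p.detach().clone() for n, p in gen.named_parameters()}
#   fisher = fisher_diagonal(gen, disc)
# While training on task B, the generator minimizes its usual adversarial
# loss plus ewc_penalty(gen, old_params, fisher), so parameters that
# mattered for task A resist drifting while the rest adapt freely.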

Related articles:
arXiv:1906.00695 [cs.LG] (Published 2019-06-03)
Continual learning with hypernetworks
arXiv:1811.11682 [cs.LG] (Published 2018-11-28)
Experience Replay for Continual Learning
arXiv:1811.01146 [cs.LG] (Published 2018-11-03)
Closed-Loop GAN for Continual Learning