arXiv:2102.03778 [cs.CV]

Adversarial Training of Variational Auto-encoders for Continual Zero-shot Learning

Subhankar Ghosh

Published 2021-02-07 (Version 1)

Most existing artificial neural networks (ANNs) fail to learn continually because of catastrophic forgetting, whereas humans can learn new tasks while maintaining their performance on previous ones. Storing all previous data can alleviate the problem, but the memory cost makes this infeasible in real-world applications. To address this issue, we propose a continual zero-shot learning model, better suited to real-world scenarios, that learns tasks sequentially and can distinguish classes it has never seen during training. We present a hybrid network consisting of a shared VAE module that holds information about all tasks and a task-specific private VAE module for each task. The model's size grows with each task to prevent catastrophic forgetting of task-specific skills, and it includes a replay approach to preserve shared skills. We demonstrate that our hybrid model is effective on several datasets, i.e., CUB, AWA1, AWA2, and aPY, and that it is superior for sequential class learning in both zero-shot learning (ZSL) and generalized zero-shot learning (GZSL) settings.
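To make the hybrid architecture concrete, here is a minimal PyTorch sketch of a shared VAE plus per-task private VAE modules, where the model grows by one private module per task as the abstract describes. All names (VAEModule, HybridCZSL, add_task, vae_loss) and the dimensions are illustrative assumptions, not taken from the paper; the paper's adversarial training and replay components are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEModule(nn.Module):
    """A small VAE block: encode to (mu, logvar), reparameterize, decode."""
    def __init__(self, in_dim, latent_dim, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Standard ELBO objective: reconstruction error plus KL to a unit-Gaussian prior."""
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

class HybridCZSL(nn.Module):
    """Shared VAE holds cross-task knowledge; one private VAE is appended per task,
    so the model's size grows with the task sequence (as in the abstract)."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.in_dim, self.latent_dim = in_dim, latent_dim
        self.shared = VAEModule(in_dim, latent_dim)
        self.privates = nn.ModuleList()  # grows by one module per task

    def add_task(self):
        self.privates.append(VAEModule(self.in_dim, self.latent_dim))

    def forward(self, x, task_id):
        return self.shared(x), self.privates[task_id](x)
```

A hypothetical usage, assuming 2048-dimensional image features as inputs:

```python
model = HybridCZSL(in_dim=2048, latent_dim=64)
model.add_task()                 # begin task 0
x = torch.randn(8, 2048)         # a dummy feature batch
(s_rec, s_mu, s_lv), (p_rec, p_mu, p_lv) = model(x, task_id=0)
loss = vae_loss(s_rec, x, s_mu, s_lv) + vae_loss(p_rec, x, p_mu, p_lv)
```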

Related articles:
arXiv:2301.09879 [cs.CV] (Published 2023-01-24)
Data Augmentation Alone Can Improve Adversarial Training
arXiv:2303.06241 [cs.CV] (Published 2023-03-10, updated 2023-04-05)
Do we need entire training data for adversarial training?
arXiv:2406.01867 [cs.CV] (Published 2024-06-04)
MoLA: Motion Generation and Editing with Latent Diffusion Enhanced by Adversarial Training