arXiv:1910.02760 [cs.LG]
Negative Sampling in Variational Autoencoders
Adrián Csiszárik, Beatrix Benkő, Dániel Varga
Published 2019-10-07 (Version 1)
We propose negative sampling as an approach to improve the notoriously poor out-of-distribution (OOD) likelihood estimates of Variational Autoencoder models. Our model pushes the latent images of negative samples away from the prior. When the negative samples come from an auxiliary dataset, such a model can vastly improve on baselines when evaluated on OOD detection tasks. Perhaps more surprisingly, a fully unsupervised variant also significantly improves detection performance: using the output of the generator as negative samples yields a model that can be interpreted as adversarially trained.
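The abstract does not give the exact training objective, so the following PyTorch sketch is only an illustration of the stated idea: keep the usual ELBO for in-distribution data and add a term that pushes the encoder's posterior for negative samples away from the prior. The hinge form of that term, the margin, the weighting, and all names (SmallVAE, kl_to_prior, loss_with_negative_sampling) are assumptions introduced here for illustration, not the paper's formulation.

```python
# Illustrative sketch only: a VAE loss that combines the standard negative ELBO
# on in-distribution data with a (hypothetical) hinge term that rewards a large
# KL between the negative samples' posterior and the N(0, I) prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))  # outputs logits

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def kl_to_prior(mu, logvar):
    # KL( q(z|x) || N(0, I) ), summed over latent dimensions, per sample.
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - 1.0 - logvar, dim=1)

def loss_with_negative_sampling(model, x_pos, x_neg, margin=50.0, neg_weight=1.0):
    # In-distribution samples: usual negative ELBO (reconstruction + KL).
    recon_logits, mu, logvar = model(x_pos)
    nll = F.binary_cross_entropy_with_logits(
        recon_logits, x_pos, reduction="none").sum(dim=1)
    elbo_loss = nll + kl_to_prior(mu, logvar)

    # Negative samples: push their latent codes away from the prior by
    # penalizing a KL smaller than the margin (hinge form is an assumption).
    mu_n, logvar_n = model.encode(x_neg)
    neg_term = F.relu(margin - kl_to_prior(mu_n, logvar_n))

    return elbo_loss.mean() + neg_weight * neg_term.mean()
```

For the unsupervised variant described above, x_neg would be samples generated by the decoder itself rather than items from an auxiliary dataset.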