arXiv:2010.13064 [stat.ML]

Further Analysis of Outlier Detection with Deep Generative Models

Ziyu Wang, Bin Dai, David Wipf, Jun Zhu

Published 2020-10-25, Version 1

The recent, counter-intuitive discovery that deep generative models (DGMs) frequently assign higher likelihoods to outliers than to in-distribution data has implications both for outlier detection applications and for our overall understanding of generative modeling. In this work, we present a possible explanation for this phenomenon, starting from the observation that a model's typical set and high-density region may not coincide. From this vantage point we propose a novel outlier test, the empirical success of which suggests that the failure of existing likelihood-based outlier tests does not necessarily imply that the corresponding generative model is uncalibrated. We also conduct additional experiments to help disentangle the impact of low-level texture versus high-level semantics in differentiating outliers. In aggregate, these results suggest that the standard evaluation practices and benchmarks commonly applied in the literature need to be modified.
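To make the typical-set intuition concrete, the sketch below implements a generic typicality-style outlier test rather than a plain likelihood threshold: an input is flagged when its negative log-likelihood lies far from the model's estimated entropy, in either direction. This is a minimal illustrative sketch, not the paper's actual test; the function names, the validation-based calibration, and the two-sided band are assumptions made here for illustration, and it assumes per-sample NLLs (in nats) from some already-trained DGM.

```python
# Sketch of a typicality-style outlier test, illustrating that the
# typical set and the high-density region of a model need not coincide.
# NOT the paper's exact procedure; all names and the calibration scheme
# below are illustrative assumptions.

import numpy as np

def typicality_outlier_test(nll_test, nll_val, eps_quantile=0.99):
    """Flag inputs whose NLL falls outside the model's typical-set band.

    nll_test : (n,) array of per-sample NLLs for the inputs under test.
    nll_val  : (m,) array of per-sample NLLs on held-out in-distribution
               data, used to estimate the model entropy H ~ E[-log p(x)].
    Returns a boolean array; True marks a suspected outlier.
    """
    entropy_est = nll_val.mean()  # Monte Carlo estimate of H
    # Calibrate the band half-width so ~eps_quantile of validation
    # data is accepted as typical.
    eps = np.quantile(np.abs(nll_val - entropy_est), eps_quantile)
    # Atypical if the NLL is far from H in EITHER direction: an
    # unusually LOW NLL (high density) is also flagged as an outlier.
    return np.abs(nll_test - entropy_est) > eps
```

Note the contrast with a likelihood threshold: a high-density outlier (low NLL) would pass a threshold test, but the two-sided band flags it precisely because its likelihood is atypically high, which is the gap between the high-density region and the typical set that motivates the work.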

Related articles:
arXiv:1810.09136 [stat.ML] (Published 2018-10-22)
Do Deep Generative Models Know What They Don't Know?
arXiv:2411.04216 [stat.ML] (Published 2024-11-06)
Debiasing Synthetic Data Generated by Deep Generative Models
arXiv:2304.08054 [stat.ML] (Published 2023-04-17)
Fed-MIWAE: Federated Imputation of Incomplete Data via Deep Generative Models