arXiv:2202.06844 [stat.ML]

On Pitfalls of Identifiability in Unsupervised Learning. A Note on: "Desiderata for Representation Learning: A Causal Perspective"

Shubhangi Ghosh, Luigi Gresele, Julius von Kügelgen, Michel Besserve, Bernhard Schölkopf

Published 2022-02-14 (Version 1)

Model identifiability is a desirable property in the context of unsupervised representation learning. In its absence, different models may be observationally indistinguishable yet yield representations that are nontrivially related to one another, making the recovery of a ground-truth generative model fundamentally impossible; this is often demonstrated through suitably constructed counterexamples. In this note, we discuss one such construction, illustrating a potential failure case of an identifiability result presented in "Desiderata for Representation Learning: A Causal Perspective" by Wang & Jordan (2021). The construction is based on the theory of nonlinear independent component analysis. We comment on the implications of this and other counterexamples for identifiable representation learning.
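The flavor of counterexample referred to here can be illustrated with a much simpler, classical case from (linear) ICA theory, not the paper's actual construction: with independent Gaussian sources, any rotation of the latent space produces an observationally identical model, so the true sources cannot be recovered from data alone. A minimal NumPy sketch (seed, sample size, and tolerance are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model A: independent standard-normal sources, identity mixing.
z_a = rng.standard_normal((n, 2))
x_a = z_a  # observations are the sources themselves

# Model B: independent sources passed through a 45-degree rotation,
# i.e., a genuinely different generative model.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_b = rng.standard_normal((n, 2))
x_b = z_b @ R.T

# Both observation distributions are N(0, I): their sample covariances
# agree (up to Monte Carlo error), so no amount of data distinguishes
# the two models, yet their latents differ by a nontrivial rotation.
cov_a = np.cov(x_a.T)
cov_b = np.cov(x_b.T)
print(np.allclose(cov_a, np.eye(2), atol=0.02))  # True
print(np.allclose(cov_b, np.eye(2), atol=0.02))  # True
```

In the nonlinear setting discussed in the note, the analogous role is played by measure-preserving nonlinear transformations of the latent space, which likewise leave the observational distribution unchanged while altering the representation.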

Related articles:
arXiv:2109.03795 [stat.ML] (Published 2021-09-08)
Desiderata for Representation Learning: A Causal Perspective
arXiv:2207.06137 [stat.ML] (Published 2022-07-13)
Probing the Robustness of Independent Mechanism Analysis for Representation Learning
arXiv:2408.16035 [stat.ML] (Published 2024-08-28)
Analysis of Diagnostics (Part II): Prevalence, Linear Independence, and Unsupervised Learning