arXiv Analytics

arXiv:2207.06137 [stat.ML]

Probing the Robustness of Independent Mechanism Analysis for Representation Learning

Joanna Sliwa, Shubhangi Ghosh, Vincent Stimper, Luigi Gresele, Bernhard Schölkopf

Published 2022-07-13 (Version 1)

One aim of representation learning is to recover the original latent code that generated the data, a task which requires additional information or inductive biases. A recently proposed approach, termed Independent Mechanism Analysis (IMA), postulates that each latent source should influence the observed mixtures independently, complementing standard nonlinear independent component analysis and taking inspiration from the principle of independent causal mechanisms. While theory and experiments have shown that IMA helps recover the true latents, the method's performance has so far only been characterized when the modeling assumptions are exactly satisfied. Here, we test the method's robustness to violations of its underlying assumptions. We find that the benefits of IMA-based regularization for recovering the true sources extend to mixing functions that violate the IMA principle to varying degrees, whereas standard regularizers do not provide the same benefits. Moreover, we show that unregularized maximum likelihood recovers mixing functions that systematically deviate from the IMA principle, and we provide an argument elucidating the benefits of IMA-based regularization.
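To make the IMA postulate concrete: in the IMA literature, the degree to which each latent influences the mixtures "independently" is typically measured pointwise via Hadamard's inequality on the mixing function's Jacobian, as the gap between the sum of log column norms and the log absolute determinant. This gap is nonnegative and vanishes exactly when the Jacobian columns are orthogonal. The sketch below is illustrative only (function names and test matrices are ours, not from the paper), assuming a square, invertible Jacobian:

```python
import numpy as np

def ima_contrast(jacobian):
    """Pointwise IMA contrast for a square, invertible Jacobian J:
    sum_i log ||J[:, i]|| - log |det J|.
    By Hadamard's inequality this is >= 0, with equality iff the
    columns of J are orthogonal, i.e. each latent source influences
    the observations along mutually orthogonal directions.
    """
    col_norms = np.linalg.norm(jacobian, axis=0)  # one norm per latent
    _, logabsdet = np.linalg.slogdet(jacobian)    # stable log|det J|
    return np.sum(np.log(col_norms)) - logabsdet

# A diagonal Jacobian has orthogonal columns -> contrast is zero.
J_orth = np.diag([2.0, 0.5])

# A shear mixes the latents' directions of influence -> contrast > 0.
J_shear = np.array([[1.0, 0.9],
                    [0.0, 1.0]])
```

Averaging this quantity over the latent space gives a scalar measure of how strongly a given mixing function violates the IMA principle, which is the kind of "degree of violation" the abstract refers to.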

Related articles: Most relevant | Search more
arXiv:2202.06844 [stat.ML] (Published 2022-02-14)
On Pitfalls of Identifiability in Unsupervised Learning. A Note on: "Desiderata for Representation Learning: A Causal Perspective"
arXiv:2408.05854 [stat.ML] (Published 2024-08-11)
On the Robustness of Kernel Goodness-of-Fit Tests
arXiv:1807.10272 [stat.ML] (Published 2018-07-26)
Evaluating and Understanding the Robustness of Adversarial Logit Pairing