arXiv:2301.09031 [stat.ML]

Counterfactual (Non-)identifiability of Learned Structural Causal Models

Arash Nasr-Esfahany, Emre Kiciman

Published 2023-01-22 (Version 1)

Recent advances in probabilistic generative modeling have motivated learning Structural Causal Models (SCMs) from observational datasets using deep conditional generative models, also known as Deep Structural Causal Models (DSCMs). If successful, DSCMs can be used for causal estimation tasks, e.g., for answering counterfactual queries. In this work, we warn practitioners about the non-identifiability of counterfactual inference from observational data, even in the absence of unobserved confounding and with a known causal structure. We prove counterfactual identifiability of monotonic generation mechanisms with single-dimensional exogenous variables. For general generation mechanisms with multi-dimensional exogenous variables, we provide an impossibility result for counterfactual identifiability, motivating the need for parametric assumptions. As a practical approach, we propose a method for estimating worst-case errors of a learned DSCM's counterfactual predictions. The size of this error can serve as an essential metric for deciding whether DSCMs are a viable approach for counterfactual inference in a specific problem setting. In evaluation, our method confirms negligible counterfactual errors for an identifiable SCM from prior work, and also provides informative error bounds for a non-identifiable synthetic SCM.
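
To illustrate the monotonic, single-dimensional-noise setting in which the abstract claims counterfactual identifiability, the following sketch (not the authors' code; the mechanism f and the helper names abduct and counterfactual_y are assumptions made here for illustration) walks through the standard abduction-action-prediction steps on a toy SCM X -> Y whose mechanism is strictly increasing in a scalar exogenous noise, so the abduction step has a unique solution:

import numpy as np

# Toy SCM: X -> Y with Y = f(X, U) = X + exp(U), U ~ N(0, 1).
# f is strictly increasing in the scalar noise U, so U is uniquely
# recoverable from an observed (x, y) pair -- the identifiable regime
# described in the abstract. Illustrative sketch only, not the paper's
# estimator.

def f(x, u):
    # Structural mechanism for Y, monotonic in u.
    return x + np.exp(u)

def abduct(x_obs, y_obs):
    # Abduction: invert the mechanism to recover the exogenous noise.
    return np.log(y_obs - x_obs)

def counterfactual_y(x_obs, y_obs, x_cf):
    # Action + prediction: hold the abducted noise fixed and
    # re-evaluate the mechanism under do(X = x_cf).
    u_hat = abduct(x_obs, y_obs)
    return f(x_cf, u_hat)

rng = np.random.default_rng(0)
x_obs = 1.0
y_obs = f(x_obs, rng.normal())
print(counterfactual_y(x_obs, y_obs, x_cf=2.0))

With a multi-dimensional exogenous variable, the abduction step generally has no unique solution, which is the source of the non-identifiability the paper warns about.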

Related articles:
arXiv:2405.05025 [stat.ML] (Published 2024-05-08)
Learning Structural Causal Models through Deep Generative Models: Methods, Guarantees, and Challenges
arXiv:2106.08161 [stat.ML] (Published 2021-06-15)
Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness
arXiv:2202.06891 [stat.ML] (Published 2022-02-14)
Counterfactual inference for sequential experimental design