arXiv Analytics

arXiv:2405.09516 [stat.ML]

Generalization Bounds for Causal Regression: Insights, Guarantees and Sensitivity Analysis

Daniel Csillag, Claudio José Struchiner, Guilherme Tegoni Goedert

Published 2024-05-15 (Version 1)

Many algorithms have recently been proposed for causal machine learning. Yet there is little to no theory on their quality, especially in the finite-sample regime. In this work, we propose a theory based on generalization bounds that provides such guarantees. By introducing a novel change-of-measure inequality, we are able to tightly bound the model loss in terms of the deviation of the treatment propensities over the population, which we show can be empirically bounded. Our theory is fully rigorous and holds even in the face of hidden confounding and violations of positivity. We demonstrate our bounds on semi-synthetic and real data, showcasing their remarkable tightness and practical utility.
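The abstract's central idea — controlling loss under a shifted (interventional) distribution via treatment propensities — is in the spirit of classical propensity reweighting. The paper's actual bound is not reproduced here; the following is only an illustrative sketch of how the spread of the inverse propensities `1/e(x)` governs how loose a reweighted risk estimate becomes (all variable names and the uniform data-generating process are hypothetical):

```python
import numpy as np

# Illustrative sketch: change-of-measure via propensity reweighting.
# The treated subgroup's loss is reweighted toward the full population
# with importance weights w(x) = 1{treated} / e(x), where e(x) is the
# (estimated) treatment propensity.

rng = np.random.default_rng(0)
n = 1000
propensity = rng.uniform(0.2, 0.8, size=n)    # hypothetical estimated e(x)
treated = rng.random(n) < propensity          # treatment assignment
loss = rng.uniform(0.0, 1.0, size=n)          # per-sample model loss in [0, 1]

# Factual empirical risk, computed only on the treated subgroup
factual_risk = loss[treated].mean()

# Importance-weighted risk: an unbiased estimate of population risk
# under correct propensities; its variance grows with max 1/e(x).
weights = treated / propensity
reweighted_risk = (weights * loss).mean()

print(factual_risk, reweighted_risk, weights.max())
```

Note how the weight spread (`weights.max()` can reach `1/0.2 = 5` here) directly controls how much a few samples dominate the estimate; bounding this deviation of the propensities over the population is exactly the quantity the abstract says can be limited empirically.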

Related articles:
arXiv:2312.00427 [stat.ML] (Published 2023-12-01)
From Mutual Information to Expected Dynamics: New Generalization Bounds for Heavy-Tailed SGD
arXiv:1902.00985 [stat.ML] (Published 2019-02-03)
Adversarial Networks and Autoencoders: The Primal-Dual Relationship and Generalization Bounds
arXiv:1902.01449 [stat.ML] (Published 2019-02-04)
Generalization Bounds For Unsupervised and Semi-Supervised Learning With Autoencoders