arXiv:1907.04669 [cs.LG]

Optimal Explanations of Linear Models

Dimitris Bertsimas, Arthur Delarue, Patrick Jaillet, Sebastien Martin

Published 2019-07-08 (version 1)

When predictive models are used to support complex and important decisions, the ability to explain a model's reasoning can increase trust, expose hidden biases, and reduce vulnerability to adversarial attacks. However, attempts at interpreting models are often ad hoc and application-specific, and the concept of interpretability itself is not well-defined. We propose a general optimization framework to create explanations for linear models. Our methodology decomposes a linear model into a sequence of models of increasing complexity using coordinate updates on the coefficients. Computing this decomposition optimally is a difficult optimization problem for which we propose exact algorithms and scalable heuristics. By solving this problem, we can derive a parametrized family of interpretability metrics for linear models that generalizes typical proxies, and study the tradeoff between interpretability and predictive accuracy.
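The decomposition the abstract describes can be illustrated with a simple greedy heuristic: starting from the zero model, repeatedly set one coordinate of the coefficient vector to its value in the full model, choosing at each step the coordinate that most reduces squared prediction error. The sketch below is only an illustration of this coordinate-update idea under that greedy assumption, not the authors' exact algorithm, and the function name `greedy_decomposition` is hypothetical.

```python
import numpy as np

def greedy_decomposition(X, y, beta_full, n_steps):
    """Build a sequence of linear models of increasing complexity.

    Heuristic sketch (not the paper's exact method): at each step,
    copy one coordinate of beta_full into the current coefficient
    vector, picking the coordinate that most reduces squared error.
    Returns the whole sequence of models, from zero to beta_full.
    """
    beta = np.zeros_like(beta_full)
    path = [beta.copy()]
    remaining = set(range(len(beta_full)))
    for _ in range(min(n_steps, len(beta_full))):
        best_j, best_err = None, np.inf
        for j in remaining:
            trial = beta.copy()
            trial[j] = beta_full[j]  # one coordinate update
            err = np.sum((y - X @ trial) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        beta[best_j] = beta_full[best_j]
        remaining.remove(best_j)
        path.append(beta.copy())
    return path

# Toy example: coordinates are revealed roughly in order of importance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_full = np.array([5.0, 0.1, -2.0])
y = X @ beta_full  # noiseless synthetic responses
path = greedy_decomposition(X, y, beta_full, n_steps=3)
```

Each prefix of `path` is a simpler surrogate model, so the sequence itself traces the interpretability/accuracy tradeoff the abstract mentions; the hard optimization problem is choosing the update order optimally rather than greedily.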

Comments: arXiv admin note: substantial text overlap with arXiv:1907.03419
Categories: cs.LG, stat.ML
Related articles:
arXiv:1907.03419 [cs.LG] (Published 2019-07-08)
The Price of Interpretability
arXiv:2211.01858 [cs.LG] (Published 2022-11-03)
Relating graph auto-encoders to linear models
arXiv:2505.08550 [cs.LG] (Published 2025-05-12)
OLinear: A Linear Model for Time Series Forecasting in Orthogonally Transformed Domain
Wenzhen Yue et al.