arXiv:2502.01027 [stat.ML]

Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees

Yannis Montreuil, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi

Published 2025-02-03 (Version 1)

Learning-to-Defer (L2D) facilitates optimal task allocation between AI systems and human decision-makers. Despite its potential, we show that current two-stage L2D frameworks are highly vulnerable to adversarial attacks, which can misdirect queries or overwhelm decision agents, significantly degrading system performance. This paper conducts the first comprehensive analysis of adversarial robustness in two-stage L2D frameworks. We introduce two novel attack strategies -- untargeted and targeted -- that exploit inherent structural vulnerabilities in these systems. To mitigate these threats, we propose SARD, a robust, convex deferral algorithm rooted in Bayes-consistency and $(\mathcal{R},\mathcal{G})$-consistency. Our approach guarantees optimal task allocation under adversarial perturbations for all surrogates in the cross-entropy family. Extensive experiments on classification, regression, and multi-task benchmarks validate the robustness of SARD.
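To make the structural vulnerability concrete, the toy sketch below models the second-stage deferral step as a rejector that scores the predictor and each expert, then routes the query to the highest-scoring agent, and mounts a generic untargeted gradient attack against that routing. All names, dimensions, the linear rejector, and the sign-gradient attack are illustrative assumptions for this sketch; they are not the paper's SARD algorithm or its specific attack strategies.

```python
import numpy as np

# Toy sketch of two-stage Learning-to-Defer (L2D) inference.
# Assumed setup: a fixed stage-1 classifier and a linear stage-2
# rejector that scores [model, expert_1, expert_2] for each query.

rng = np.random.default_rng(0)
D, C, E = 8, 3, 2                    # input dim, classes, number of experts

W_clf = rng.normal(size=(C, D))      # stage 1: pretrained, frozen classifier
W_rej = rng.normal(size=(E + 1, D))  # stage 2: rejector (agent 0 = model)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def allocate(x):
    """Route x to the agent with the highest rejector score."""
    return int(np.argmax(W_rej @ x))

def untargeted_attack(x, eps=0.5, steps=20, lr=0.1):
    """Generic untargeted attack (not the paper's): ascend the rejector's
    cross-entropy loss w.r.t. its own clean allocation to flip the routing."""
    target = allocate(x)
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W_rej @ x_adv)
        # For linear logits z = W x, dCE/dx = (p - onehot(target)) @ W.
        g = (p - np.eye(E + 1)[target]) @ W_rej
        x_adv = x_adv + lr * np.sign(g)            # gradient ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project into L-inf ball
    return x_adv

x = rng.normal(size=D)
print("clean allocation:      ", allocate(x))
print("adversarial allocation:", allocate(untargeted_attack(x)))
```

In this toy setting, flipping the rejector's argmax is enough to misroute the query to the wrong agent even though the stage-1 classifier is untouched, which illustrates the kind of allocation-level vulnerability the paper analyzes.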

Related articles:
arXiv:1805.12152 [stat.ML] (Published 2018-05-30)
There Is No Free Lunch In Adversarial Robustness (But There Are Unexpected Benefits)
arXiv:1905.13736 [stat.ML] (Published 2019-05-31)
Unlabeled Data Improves Adversarial Robustness
arXiv:2010.02558 [stat.ML] (Published 2020-10-06)
Constraining Logits by Bounded Function for Adversarial Robustness