arXiv Analytics

arXiv:2404.01356 [cs.LG]

The Double-Edged Sword of Input Perturbations to Robust Accurate Fairness

Xuran Li, Peng Wu, Yanting Chen, Xingjun Ma, Zhen Zhang, Kaixiang Dong

Published 2024-04-01, Version 1

Deep neural networks (DNNs) are known to be sensitive to adversarial input perturbations, which can reduce either prediction accuracy or individual fairness. To jointly characterize the susceptibility of prediction accuracy and individual fairness to adversarial perturbations, we introduce a novel robustness definition termed robust accurate fairness. Informally, robust accurate fairness requires that predictions for an instance and its similar counterparts consistently align with the ground truth when subjected to input perturbations. We then propose an adversarial attack approach, dubbed RAFair, to expose false or biased adversarial defects in DNNs, which either undermine prediction accuracy or compromise individual fairness. We further show that such adversarial instances can be effectively addressed by carefully designed benign perturbations that correct their predictions to be accurate and fair. Our work explores the double-edged sword of input perturbations with respect to robust accurate fairness in DNNs, as well as the potential of using benign perturbations to correct adversarial instances.
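
As a minimal illustration (not the paper's implementation), the Python sketch below checks the robust accurate fairness property for a single instance. It assumes a hypothetical model function that returns predicted labels, a similar function that yields counterparts differing only in protected attributes, and randomly sampled L-infinity perturbations within a budget eps; these names and the random sampling strategy are illustrative assumptions, not taken from the paper.

    import numpy as np

    def perturb(x, eps, n=10, rng=np.random.default_rng(0)):
        # Sample n random perturbations of x within an L_inf ball of radius eps.
        # (Illustrative only: an attack would search this ball adversarially.)
        return [x + rng.uniform(-eps, eps, size=x.shape) for _ in range(n)]

    def robust_accurate_fair(model, x, y, similar, eps):
        # True iff the instance x and all of its similar counterparts are
        # predicted as the ground truth y under every sampled perturbation.
        # A violation corresponds to a false or biased adversarial defect
        # in the abstract's terminology.
        for z in [x, *similar(x)]:
            for z_pert in perturb(z, eps):
                if model(z_pert) != y:
                    return False
        return True

Note that random sampling only approximates the worst case; an attack such as RAFair would instead search the perturbation budget adversarially for violating inputs.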

Related articles:
arXiv:2107.11671 [cs.LG] (Published 2021-07-24)
Adversarial training may be a double-edged sword
arXiv:2303.01456 [cs.LG] (Published 2023-03-02, updated 2023-10-31)
The Double-Edged Sword of Implicit Bias: Generalization vs. Robustness in ReLU Networks
arXiv:1909.05314 [cs.LG] (Published 2019-09-11)
ScieNet: Deep Learning with Spike-assisted Contextual Information Extraction