arXiv:2501.18331 [cs.LG]

Stream-Based Monitoring of Algorithmic Fairness

Jan Baumeister, Bernd Finkbeiner, Frederik Scheerer, Julian Siber, Tobias Wagenpfeil

Published 2025-01-30 (Version 1)

Automatic decision and prediction systems are increasingly deployed in applications where they significantly impact people's livelihoods, such as predicting the creditworthiness of loan applicants or the recidivism risk of defendants. These applications have given rise to a new class of algorithmic-fairness specifications that require the systems to decide and predict without bias against social groups. Verifying these specifications statically is often out of reach for realistic systems, since the systems may, for example, employ complex learning components and reason over a large input space. In this paper, we therefore propose stream-based monitoring as a solution for verifying the algorithmic fairness of decision and prediction systems at runtime. Concretely, we present a principled way to formalize algorithmic fairness over temporal data streams in the specification language RTLola and demonstrate the efficacy of this approach on a number of benchmarks. Besides synthetic scenarios that particularly highlight its efficiency on streams with growing amounts of data, we notably evaluate the monitor on real-world data from the recidivism prediction tool COMPAS.
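
To give a flavor of what such a specification might look like, the following is a minimal RTLola sketch, not taken from the paper, that monitors demographic parity over a stream of decisions. The input stream names (group, accepted), the Boolean encoding of group membership, and the 0.1 tolerance are all illustrative assumptions.

```
// Illustrative RTLola sketch (assumed encoding, not the paper's actual
// specification): the monitored system emits one event per decision.
input group    : Bool    // assumption: true iff the individual is in the protected group
input accepted : Bool    // true iff the system's decision was positive

// Running counts per group, kept as floats to simplify the rates below.
output n_prot    := n_prot.offset(by: -1).defaults(to: 0.0) + (if group then 1.0 else 0.0)
output n_other   := n_other.offset(by: -1).defaults(to: 0.0) + (if group then 0.0 else 1.0)
output pos_prot  := pos_prot.offset(by: -1).defaults(to: 0.0) + (if group && accepted then 1.0 else 0.0)
output pos_other := pos_other.offset(by: -1).defaults(to: 0.0) + (if !group && accepted then 1.0 else 0.0)

// Empirical acceptance rates, guarded against division by zero.
output rate_prot  := if n_prot > 0.0 then pos_prot / n_prot else 0.0
output rate_other := if n_other > 0.0 then pos_other / n_other else 0.0

// Flag a violation once the acceptance rates diverge by more than an
// illustrative tolerance of 0.1 in either direction.
trigger rate_prot - rate_other > 0.1 || rate_other - rate_prot > 0.1 "Demographic parity violated"
```

Richer fairness notions, such as equalized odds, could follow the same counting pattern by conditioning the counts on additional input streams (e.g., a ground-truth outcome stream).
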

Comments: 31st International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2025)
Categories: cs.LG, cs.LO, cs.SE
Related articles:
arXiv:2411.02569 [cs.LG] (Published 2024-11-04, updated 2024-11-16)
The Intersectionality Problem for Algorithmic Fairness
arXiv:2002.07676 [cs.LG] (Published 2020-02-18)
A Resolution in Algorithmic Fairness: Calibrated Scores for Fair Classifications
arXiv:2010.07249 [cs.LG] (Published 2020-10-14)
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization