arXiv Analytics

arXiv:2002.12278 [cs.LG]

Testing Monotonicity of Machine Learning Models

Arnab Sharma, Heike Wehrheim

Published 2020-02-27, Version 1

Today, machine learning (ML) models are increasingly applied in decision making. This induces an urgent need for quality assurance of ML models with respect to (often domain-dependent) requirements. Monotonicity is one such requirement: it requires the software 'learned' by an ML algorithm to produce increasing predictions as the values of certain attributes increase. While several ML algorithms exist for ensuring monotonicity of the generated model, approaches for checking monotonicity, in particular of black-box models, are largely lacking. In this work, we propose verification-based testing of monotonicity, i.e., the formal computation of test inputs on a white-box model via verification technology, together with the automatic inference of this approximating white-box model from the black-box model under test. On the white-box model, the space of test inputs can be systematically explored by a directed computation of test cases. An empirical evaluation on 90 black-box models shows that verification-based testing can outperform adaptive random testing as well as property-based techniques with respect to both effectiveness and efficiency.
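The paper's own method (inferring an approximating white-box model and computing test inputs on it via verification technology) is not reproduced here. As a minimal sketch of the monotonicity property being tested, the following Python snippet performs a naive random search for counterexamples to monotonicity in a black-box predictor, roughly in the spirit of the random-testing baselines the abstract mentions. All names (`violates_monotonicity`, `random_monotonicity_test`, the toy model) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def violates_monotonicity(predict, x, feature_idx, delta):
    """Return True if increasing feature `feature_idx` of input `x` by
    `delta` makes the black-box prediction strictly decrease."""
    x_hi = x.copy()
    x_hi[feature_idx] += delta
    return predict(x_hi) < predict(x)

def random_monotonicity_test(predict, n_features, feature_idx,
                             n_trials=1000, low=0.0, high=1.0, seed=0):
    """Naive random search for a monotonicity counterexample.
    Returns a violating input pair (x, x_hi) if found, else None."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        x = rng.uniform(low, high, size=n_features)
        delta = rng.uniform(0.0, high - low)
        if violates_monotonicity(predict, x, feature_idx, delta):
            x_hi = x.copy()
            x_hi[feature_idx] += delta
            return x, x_hi
    return None

if __name__ == "__main__":
    # Toy black-box model that is NOT monotone in feature 1.
    predict = lambda x: float(x[0] - 0.5 * x[1] ** 2)
    violation = random_monotonicity_test(predict, n_features=2, feature_idx=1)
    print("counterexample:", violation)
```

In contrast to this undirected search, the verification-based approach described in the abstract explores the input space systematically on the inferred white-box model, which is what the evaluation compares against such random baselines.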

Related articles:
arXiv:2010.00821 [cs.LG] (Published 2020-10-02)
Explainable Online Validation of Machine Learning Models for Practical Applications
arXiv:2009.05818 [cs.LG] (Published 2020-09-12)
MeLIME: Meaningful Local Explanation for Machine Learning Models
arXiv:2005.09512 [cs.LG] (Published 2020-05-18)
Applying Genetic Programming to Improve Interpretability in Machine Learning Models