arXiv Analytics

arXiv:2011.06796 [cs.LG]

Wisdom of the Ensemble: Improving Consistency of Deep Learning Models

Lijing Wang, Dipanjan Ghosh, Maria Teresa Gonzalez Diaz, Ahmed Farahat, Mahbubul Alam, Chetan Gupta, Jiangzhuo Chen, Madhav Marathe

Published 2020-11-13 (Version 1)

Deep learning classifiers are assisting humans in making decisions, so users' trust in these models is of paramount importance. Trust is often a function of consistent behavior: from an AI model's perspective, given the same input, the user expects the same output, especially for correct outputs; in other words, consistently correct outputs. This paper studies model behavior in the context of periodic retraining of deployed models, where successive generations of a model may disagree on the labels assigned to the same input. We formally define the consistency and correct-consistency of a learning model. We prove that the consistency and correct-consistency of an ensemble learner are not less than the average consistency and correct-consistency of its individual learners, and that correct-consistency can be improved with a guaranteed probability by combining learners whose accuracy is not less than the average accuracy of the ensemble's component learners. To validate the theory, we also propose an efficient dynamic snapshot ensemble method and demonstrate its value on three datasets with two state-of-the-art deep learning classifiers.
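The abstract names consistency and correct-consistency without spelling out their definitions. A natural empirical reading (an assumption here, not the paper's exact formulation) is: consistency is the fraction of inputs on which two successive model generations agree, and correct-consistency is the fraction on which both agree on the correct label. The minimal Python sketch below estimates both quantities for two model generations, each formed by majority vote over component learners; the function names and toy predictions are illustrative, not taken from the paper.

```python
import numpy as np

def consistency(preds_a, preds_b):
    """Fraction of inputs on which two model generations agree."""
    return np.mean(preds_a == preds_b)

def correct_consistency(preds_a, preds_b, labels):
    """Fraction of inputs on which both generations predict the correct label."""
    return np.mean((preds_a == labels) & (preds_b == labels))

def majority_vote(component_preds):
    """Combine component learners' predictions by per-sample majority vote."""
    stacked = np.stack(component_preds)  # shape: (n_learners, n_samples)
    # For each sample (column), pick the most frequent predicted class.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)

# Toy example: true labels, plus predictions from two successive model
# generations, each an ensemble of three component learners.
labels = np.array([0, 1, 1, 0, 1, 0])
gen1_components = [np.array([0, 1, 1, 0, 1, 1]),
                   np.array([0, 1, 0, 0, 1, 0]),
                   np.array([0, 0, 1, 0, 1, 0])]
gen2_components = [np.array([0, 1, 1, 1, 1, 0]),
                   np.array([0, 1, 1, 1, 0, 0]),
                   np.array([0, 1, 1, 0, 1, 0])]

gen1 = majority_vote(gen1_components)
gen2 = majority_vote(gen2_components)
print("ensemble consistency:        ", consistency(gen1, gen2))
print("ensemble correct-consistency:", correct_consistency(gen1, gen2, labels))
```

On this toy data the two ensemble generations agree on 5 of 6 inputs and are jointly correct on 5 of 6, illustrating how the two quantities can be measured across retraining cycles; the paper's theoretical results bound these ensemble quantities from below by the averages over component learners.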

Related articles:
arXiv:1811.11880 [cs.LG] (Published 2018-11-28)
Predicting the Computational Cost of Deep Learning Models
arXiv:2108.03579 [cs.LG] (Published 2021-08-08)
Expressive Power and Loss Surfaces of Deep Learning Models
arXiv:2107.02517 [cs.LG] (Published 2021-07-06)
An Evaluation of Machine Learning and Deep Learning Models for Drought Prediction using Weather Data