arXiv Analytics

arXiv:2011.14015 [cs.LG]

Active Learning in CNNs via Expected Improvement Maximization

Udai G. Nagpal, David A Knowles

Published 2020-11-27 (Version 1)

Deep learning models such as Convolutional Neural Networks (CNNs) have demonstrated high effectiveness in a variety of domains, including computer vision and, more recently, computational biology. However, training effective models often requires assembling and/or labeling large datasets, which may be prohibitively time-consuming or costly. Pool-based active learning techniques have the potential to mitigate these issues, leveraging models trained on limited data to selectively query unlabeled data points from a pool in an attempt to expedite the learning process. Here we present "Dropout-based Expected IMprOvementS" (DEIMOS), a flexible and computationally efficient approach to active learning that queries points expected to maximize the model's improvement across a representative sample of points. The proposed framework enables us to maintain a prediction covariance matrix capturing model uncertainty, and to dynamically update this matrix in order to generate diverse batches of points in the batch-mode setting. Our active learning results demonstrate that DEIMOS outperforms several existing baselines across multiple regression and classification tasks taken from computer vision and genomics.
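The abstract describes maintaining a predictive covariance matrix (estimated via dropout samples) and conditioning it on each queried point to encourage batch diversity. The following is a minimal NumPy sketch of that general idea, not the paper's exact DEIMOS algorithm: the scoring rule (total variance reduction from a hypothetical label) and the rank-one Gaussian conditioning update are assumptions made for illustration.

```python
import numpy as np

def predictive_covariance(samples):
    """Covariance of pool predictions across T stochastic (dropout) passes.

    samples: (T, N) array -- T dropout forward passes over N pool points.
    """
    return np.cov(samples, rowvar=False)

def select_batch(samples, k):
    """Greedy batch selection (illustrative sketch, not the paper's method).

    Repeatedly picks the pool point whose hypothetical labeling would most
    reduce total predictive variance, then conditions the covariance on it
    so that subsequent picks favor points not already covered by the batch.
    """
    cov = predictive_covariance(samples)
    chosen = []
    for _ in range(k):
        with np.errstate(divide="ignore", invalid="ignore"):
            # Expected variance reduction if point j were labeled:
            # sum_i cov[i, j]^2 / cov[j, j]  (Gaussian conditioning identity)
            scores = (cov ** 2).sum(axis=0) / np.diag(cov)
        scores[chosen] = -np.inf  # never re-pick an already-chosen point
        j = int(np.argmax(scores))
        chosen.append(j)
        # Rank-one update: condition the covariance on observing point j.
        cov = cov - np.outer(cov[:, j], cov[j]) / cov[j, j]
    return chosen
```

A usage sketch: run T stochastic forward passes (dropout left on at inference) over the unlabeled pool, stack the predictions into a (T, N) array, and call `select_batch(samples, k)` to get k diverse query indices.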

Related articles:
arXiv:1003.3967 [cs.LG] (Published 2010-03-21, updated 2017-12-06)
Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization
arXiv:1912.12557 [cs.LG] (Published 2019-12-29)
Active Learning in Video Tracking
arXiv:1911.07716 [cs.LG] (Published 2019-11-18)
The Effectiveness of Variational Autoencoders for Active Learning