arXiv Analytics

arXiv:2005.02561 [eess.IV]

Multi-task pre-training of deep neural networks

Romain Mormont, Pierre Geurts, Raphaël Marée

Published 2020-05-05 (Version 1)

In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. This is motivated by the fact that the community has released many small and medium-sized datasets over the years, whereas the domain has no large-scale dataset comparable to ImageNet. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model, as well as a robust evaluation and selection protocol to assess our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and compensates for the lack of specificity of ImageNet features, as both pre-training sources then yield comparable performance.
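
The abstract does not detail the architecture, but a common pattern for this kind of multi-task pre-training is a shared backbone with one classification head per task. Below is a minimal sketch of that pattern in PyTorch; the backbone choice (ResNet-50), head design, and task counts are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskNet(nn.Module):
    """Shared backbone with one classification head per pre-training task.

    Hypothetical sketch: the paper's exact architecture and training
    details are not given in the abstract, so the choices here are
    assumptions for illustration.
    """
    def __init__(self, num_classes_per_task):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # expose pooled features instead of logits
        self.backbone = backbone
        # One linear head per task in the pool (the paper uses 22 tasks).
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in num_classes_per_task]
        )

    def forward(self, x, task_id):
        features = self.backbone(x)           # shared representation
        return self.heads[task_id](features)  # task-specific logits

# Toy pool of three tasks; a pre-training step would sample a batch from
# one task and update the shared backbone plus that task's head.
model = MultiTaskNet(num_classes_per_task=[2, 4, 9])
logits = model(torch.randn(8, 3, 224, 224), task_id=1)
```

After pre-training, the heads are discarded and the shared backbone is transferred to the target task, either frozen as a feature extractor or fine-tuned end to end, matching the two transfer settings compared in the abstract.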

Comments: Accepted for publication in the IEEE Journal of Biomedical and Health Informatics, special issue on Computational Pathology
Categories: eess.IV, cs.CV, cs.LG
Related articles:
arXiv:2309.02576 [eess.IV] (Published 2023-09-05)
Emphysema Subtyping on Thoracic Computed Tomography Scans using Deep Neural Networks
arXiv:2202.02000 [eess.IV] (Published 2022-02-04)
Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks
arXiv:2010.01362 [eess.IV] (Published 2020-10-03)
COVID-19 Classification of X-ray Images Using Deep Neural Networks