arXiv Analytics

arXiv:2007.07423 [cs.CV]

Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs By Comparing Image Representations

Hong-Yu Zhou, Shuang Yu, Cheng Bian, Yifan Hu, Kai Ma, Yefeng Zheng

Published 2020-07-15 (Version 1)

In the deep learning era, pretrained models play an important role in medical image analysis, where ImageNet pretraining has been widely adopted as the de facto standard. However, there is an obvious domain gap between natural images and medical images. To bridge this gap, we propose a new pretraining method that learns from 700k radiographs without any manual annotations. We call our method Comparing to Learn (C2L) because it learns robust features by comparing different image representations. To verify the effectiveness of C2L, we conduct comprehensive ablation studies and evaluate it on different tasks and datasets. Experimental results on radiographs show that C2L significantly outperforms ImageNet pretraining and previous state-of-the-art approaches. Code and models are available.
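
The abstract describes learning "by comparing different image representations" from unlabeled radiographs, which is the general shape of a contrastive pretraining objective. Below is a minimal, hypothetical sketch of such an objective in PyTorch: two random augmentations of the same batch are encoded, and matching views are pulled together with an InfoNCE-style loss. The encoder choice, projection head, temperature, and augmentations here are illustrative assumptions, not the paper's actual C2L implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

class ContrastiveEncoder(nn.Module):
    """Backbone + projection head producing unit-length embeddings.
    ResNet-18 and the 128-d projection are illustrative choices."""
    def __init__(self, feature_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                    # drop classifier head
        self.backbone = backbone
        self.projector = nn.Linear(512, feature_dim)   # projection head

    def forward(self, x):
        z = self.projector(self.backbone(x))
        return F.normalize(z, dim=1)                   # compare on the unit sphere

def info_nce_loss(z1, z2, temperature=0.2):
    """Compare representations: each view's positive is the matching
    view of the same image; all other images in the batch are negatives."""
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Two independent random augmentations of the same unlabeled batch.
augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),
    T.RandomHorizontalFlip(),
])

encoder = ContrastiveEncoder()
batch = torch.rand(8, 3, 224, 224)                     # stand-in for radiograph crops
z1, z2 = encoder(augment(batch)), encoder(augment(batch))
loss = info_nce_loss(z1, z2)
loss.backward()
```

This sketch only illustrates the "comparing" idea at batch scale; pretraining on 700k radiographs as in the paper would additionally require a large unlabeled dataset pipeline and the specific design choices detailed in the full text.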

Related articles:
arXiv:2307.10504 [cs.CV] (Published 2023-07-20)
Identifying Interpretable Subspaces in Image Representations
arXiv:2312.02205 [cs.CV] (Published 2023-12-02)
Disentangling the Effects of Data Augmentation and Format Transform in Self-Supervised Learning of Image Representations
arXiv:2102.06982 [cs.CV] (Published 2021-02-13)
DeepRA: Predicting Joint Damage From Radiographs Using CNN with Attention