arXiv:2203.02114 [eess.IV]

MixCL: Pixel label matters to contrastive learning

Jun Li, Quan Quan, S. Kevin Zhou

Published 2022-03-04 (Version 1)

Contrastive learning and other self-supervised techniques have gained prevalence in computer vision over the past few years. They are especially valuable for medical image analysis, a field notorious for its lack of annotations. Most existing self-supervised methods, developed for natural images, focus on designing proxy tasks for unlabeled data; contrastive learning, for example, builds on the fact that an image and its transformed version share the same identity. However, pixel annotations carry much valuable information for medical image segmentation, and this information is largely ignored by contrastive learning. In this work, we propose a novel pre-training framework called Mixed Contrastive Learning (MixCL) that leverages both image identities and pixel labels by jointly maintaining identity consistency, label consistency, and reconstruction consistency. The resulting pre-trained model yields more robust representations that better characterize medical images. Extensive experiments demonstrate the effectiveness of the proposed method: it improves the baseline Dice coefficient by 5.28% and 14.12% when fine-tuned with 5% of the labeled Spleen data and 15% of the labeled BTCV data, respectively.
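
The abstract names three consistency terms but gives no formulas. Below is a minimal PyTorch sketch of how such a combined pre-training objective could be assembled; the function names, the InfoNCE form of the identity term, the soft-Dice form of the label term, the MSE reconstruction term, and the equal default weights are all illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Identity consistency (assumed InfoNCE form): an image and its
    augmented view are a positive pair; other batch items are negatives."""
    z1 = F.normalize(z1, dim=1)                  # (B, D) embeddings
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def soft_dice_loss(seg_logits, labels, eps=1e-6):
    """Label consistency (assumed soft-Dice form): overlap between the
    predicted masks and the available pixel annotations."""
    probs = torch.sigmoid(seg_logits)            # (B, 1, H, W) predictions
    inter = (probs * labels).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + labels.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def mixcl_style_loss(z1, z2, seg_logits, labels, recon, image,
                     w_id=1.0, w_lab=1.0, w_rec=1.0):
    """Hypothetical combined objective: weighted sum of the three
    consistency terms named in the abstract (weights are guesses)."""
    l_id = info_nce(z1, z2)                      # identity consistency
    l_lab = soft_dice_loss(seg_logits, labels)   # label consistency
    l_rec = F.mse_loss(recon, image)             # reconstruction consistency
    return w_id * l_id + w_lab * l_lab + w_rec * l_rec
```

In practice the label term would presumably be computed only on images for which pixel annotations exist; the abstract does not specify how unlabeled pixels are handled.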

Related articles:
arXiv:2101.05145 [eess.IV] (Published 2021-01-13)
Self-Supervised Vessel Enhancement Using Flow-Based Consistencies
arXiv:2303.15214 [eess.IV] (Published 2023-03-27)
CLIDiM: Contrastive Learning for Image Denoising in Microscopy
arXiv:2409.16042 [eess.IV] (Published 2024-09-24)
Enhanced Unsupervised Image-to-Image Translation Using Contrastive Learning and Histogram of Oriented Gradients