arXiv:2307.09402 [eess.IV]

Study of Vision Transformers for Covid-19 Detection from Chest X-rays

Sandeep Angara, Sharath Thirunagaru

Published 2023-07-17 (Version 1)

The COVID-19 pandemic has led to a global health crisis, highlighting the need for rapid and accurate virus detection. This research paper examines transfer learning with Vision Transformers for COVID-19 detection, models known for their excellent performance in image recognition tasks. We leverage the capability of Vision Transformers to capture global context and learn complex patterns from chest X-ray (CXR) images. In this work, we explored recent state-of-the-art transformer models for detecting COVID-19 from CXR images: the Vision Transformer (ViT), Swin Transformer, Max Vision Transformer (MViT), and Pyramid Vision Transformer (PVT). Through transfer learning with ImageNet weights, the models achieved an impressive accuracy range of 98.75% to 99.5%. Our experiments demonstrate that Vision Transformers achieve state-of-the-art performance in COVID-19 detection, outperforming traditional methods and even Convolutional Neural Networks (CNNs). The results highlight the potential of Vision Transformers as a powerful tool for COVID-19 detection, with implications for improving the efficiency and accuracy of screening and diagnosis in clinical settings.
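
The sketch below illustrates the kind of transfer-learning setup the abstract describes: an ImageNet-pretrained Vision Transformer fine-tuned for two-class COVID-19 CXR classification. It is not the authors' released code; the timm library, the specific model names, the dataset folder path, and all hyperparameters are assumptions chosen for illustration.

    # Minimal transfer-learning sketch (illustrative, not the paper's implementation).
    # Loads an ImageNet-pretrained Vision Transformer via timm and fine-tunes it
    # for binary COVID-19 vs. normal chest X-ray classification.
    import torch
    import timm
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # ViT backbone with ImageNet weights; the classification head is re-initialized
    # for 2 output classes. Swin, MaxViT, or PVT variants can be swapped in by name,
    # e.g. "swin_base_patch4_window7_224", "maxvit_base_tf_224", "pvt_v2_b2".
    model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2).to(device)

    # Standard ImageNet preprocessing; the CXR folder layout below is hypothetical.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    train_set = datasets.ImageFolder("cxr_dataset/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):  # epoch count is illustrative
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

Evaluating each backbone under the same preprocessing and training budget is what allows the accuracy comparison across ViT, Swin, MViT, and PVT reported above.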

Related articles:
arXiv:2004.12592 [eess.IV] (Published 2020-04-27)
Robust Screening of COVID-19 from Chest X-ray via Discriminative Cost-Sensitive Learning
arXiv:2012.13582 [eess.IV] (Published 2020-12-25)
Deep Learning Methods for Screening Pulmonary Tuberculosis Using Chest X-rays
arXiv:2105.08147 [eess.IV] (Published 2021-05-17, updated 2021-05-20)
COVID-19 Lung Lesion Segmentation Using a Sparsely Supervised Mask R-CNN on Chest X-rays Automatically Computed from Volumetric CTs