arXiv:2207.03041 [cs.CV]

Vision Transformers: State of the Art and Research Challenges

Bo-Kai Ruan, Hong-Han Shuai, Wen-Huang Cheng

Published 2022-07-07 (Version 1)

Transformers have achieved great success in natural language processing. Owing to the powerful capability of the self-attention mechanism in transformers, researchers have developed vision transformers for a variety of computer vision tasks, such as image recognition, object detection, image segmentation, pose estimation, and 3D reconstruction. This paper presents a comprehensive overview of the literature on different architecture designs and training tricks (including self-supervised learning) for vision transformers. Our goal is to provide a systematic review together with open research opportunities.
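For readers unfamiliar with the self-attention mechanism the abstract refers to, the sketch below shows standard single-head scaled dot-product self-attention over a sequence of token embeddings (e.g., flattened image patches). This is a generic illustration of the mechanism, not code from the surveyed paper; the function name and projection matrices are hypothetical.

```python
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention (illustrative sketch, not the paper's code).

    x:             (n_tokens, d_model) input embeddings, e.g. image patch tokens
    w_q, w_k, w_v: (d_model, d_head) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # project to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])        # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                              # attention-weighted sum of values

# Usage: 4 "patch" tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_self_attention(x, w_q, w_k, w_v)  # shape (4, 8)
```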

Related articles:
arXiv:2104.10972 [cs.CV] (Published 2021-04-22)
ImageNet-21K Pretraining for the Masses
arXiv:1804.03928 [cs.CV] (Published 2018-04-11)
Deep Learning For Computer Vision Tasks: A review
arXiv:2103.09950 [cs.CV] (Published 2021-03-17)
Learning to Resize Images for Computer Vision Tasks