arXiv Analytics

arXiv:2107.02174 [cs.CV]

What Makes for Hierarchical Vision Transformer?

Yuxin Fang, Xinggang Wang, Rui Wu, Jianwei Niu, Wenyu Liu

Published 2021-07-05 (Version 1)

Recent studies show that a hierarchical Vision Transformer with interleaved non-overlapped intra-window self-attention and shifted-window self-attention is able to achieve state-of-the-art performance on various visual recognition tasks and challenges CNN's dense sliding-window paradigm. Most follow-up works try to replace the shifted-window operation with other kinds of cross-window communication, while treating self-attention as the de facto standard for intra-window information aggregation. In this short preprint, we question whether self-attention is the only choice for a hierarchical Vision Transformer to attain strong performance, and ask what makes for a hierarchical Vision Transformer. We replace the self-attention layers in Swin Transformer and Shuffle Transformer with a simple linear mapping and keep the other components unchanged. The resulting architecture with 25.4M parameters and 4.2G FLOPs achieves 80.5% Top-1 accuracy, compared to 81.3% for Swin Transformer with 28.3M parameters and 4.5G FLOPs. We also experiment with other alternatives to self-attention for context aggregation inside each non-overlapped window, all of which give similarly competitive results under the same architecture. Our study reveals that the macro architecture of the Swin model family (i.e., interleaved intra-window and cross-window communication), rather than the specific aggregation layers or the specific means of cross-window communication, may be chiefly responsible for its strong performance and may be the real challenger to CNN's dense sliding-window paradigm.
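As a rough illustration of the ablation described in the abstract, the sketch below shows how window self-attention can be swapped for a simple linear mapping over the tokens of each non-overlapped window while keeping the window partition/reverse macro structure intact. This is a minimal PyTorch-style sketch under stated assumptions, not the authors' implementation; the module and parameter names (WindowLinear, window_size, token_mix) are hypothetical.

```python
# Minimal sketch (assumption: PyTorch; hypothetical names) of replacing window
# self-attention with a simple linear mapping inside each non-overlapped window,
# keeping the Swin-style window partitioning untouched.
import torch
import torch.nn as nn


def window_partition(x, window_size):
    # x: (B, H, W, C) -> (B * num_windows, window_size * window_size, C)
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)


def window_reverse(windows, window_size, H, W):
    # Inverse of window_partition: (B * num_windows, n, C) -> (B, H, W, C)
    B = windows.shape[0] // ((H // window_size) * (W // window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)


class WindowLinear(nn.Module):
    """Hypothetical drop-in replacement for window self-attention: a single
    linear mapping that mixes the tokens within each non-overlapped window."""

    def __init__(self, window_size):
        super().__init__()
        self.window_size = window_size
        n = window_size * window_size
        # Mixes information across the n tokens of a window, applied per channel.
        self.token_mix = nn.Linear(n, n)

    def forward(self, x, H, W):
        # x: (B, H * W, C)
        B, L, C = x.shape
        x = x.view(B, H, W, C)
        win = window_partition(x, self.window_size)              # (B * nW, n, C)
        win = self.token_mix(win.transpose(1, 2)).transpose(1, 2)  # mix tokens
        x = window_reverse(win, self.window_size, H, W)          # (B, H, W, C)
        return x.view(B, H * W, C)


if __name__ == "__main__":
    m = WindowLinear(window_size=7)
    x = torch.randn(2, 56 * 56, 96)
    print(m(x, 56, 56).shape)  # torch.Size([2, 3136, 96])
```

Under this view, the cross-window communication (e.g., the shifted-window or shuffle operation) stays exactly as in the original block; only the intra-window aggregation layer is exchanged, which is what the reported 80.5% vs. 81.3% comparison isolates.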

Related articles:
arXiv:2306.17373 [cs.CV] (Published 2023-06-30)
HVTSurv: Hierarchical Vision Transformer for Patient-Level Survival Prediction from Whole Slide Image
arXiv:2304.04237 [cs.CV] (Published 2023-04-09)
Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention
arXiv:2103.14030 [cs.CV] (Published 2021-03-25)
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Ze Liu et al.