arXiv:2112.04632 [cs.CV]

Recurrent Glimpse-based Decoder for Detection with Transformer

Zhe Chen, Jing Zhang, Dacheng Tao

Published 2021-12-09, updated 2022-04-12 (version 2)

Although detection with Transformer (DETR) is increasingly popular, its global attention modeling requires an extremely long training period to reach promising detection performance. As an alternative to existing studies that mainly develop advanced feature or embedding designs to tackle the training issue, we point out that Region-of-Interest (RoI) based detection refinement can readily mitigate the training difficulty of DETR methods. Based on this, we introduce a novel REcurrent Glimpse-based decOder (REGO) in this paper. In particular, REGO employs a multi-stage recurrent processing structure to help the attention of DETR gradually focus more accurately on foreground objects. In each processing stage, visual features are extracted as glimpse features from RoIs with enlarged bounding box areas of the detection results from the previous stage. A glimpse-based decoder then produces refined detection results based on both the glimpse features and the attention modeling outputs of the previous stage. In practice, REGO can be easily embedded in representative DETR variants while maintaining their fully end-to-end training and inference pipelines. Notably, REGO helps Deformable DETR achieve 44.8 AP on the MSCOCO dataset with only 36 training epochs, whereas the original DETR and Deformable DETR require 500 and 50 epochs, respectively, to reach comparable performance. Experiments also show that REGO consistently boosts the performance of different DETR detectors by up to 7% relative gain under the same setting of 50 training epochs. Code is available at https://github.com/zhechen/Deformable-DETR-REGO.
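
Since the abstract describes the architecture only in prose, the following is a minimal, illustrative PyTorch sketch of one REGO-style refinement stage. All names here (GlimpseStage, enlarge_boxes), the box enlargement factor, the RoIAlign output size, and the choice of nn.MultiheadAttention as the glimpse-based decoder are assumptions made for illustration, not the authors' implementation; see the linked repository for the actual code.

import torch
import torch.nn as nn
from torchvision.ops import roi_align


def enlarge_boxes(boxes: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    # Enlarge (x1, y1, x2, y2) boxes about their centers by `scale`;
    # the factor 1.5 is an assumed value for illustration.
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = (boxes[:, 2] - boxes[:, 0]) * scale
    h = (boxes[:, 3] - boxes[:, 1]) * scale
    return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=1)


class GlimpseStage(nn.Module):
    # One hypothetical recurrent stage: glimpse extraction from enlarged
    # RoIs, followed by glimpse-based decoding of the previous stage's
    # attention outputs.
    def __init__(self, d_model: int = 256, n_heads: int = 8, roi_size: int = 7):
        super().__init__()
        self.roi_size = roi_size
        # Project pooled RoI features into one glimpse embedding per query.
        self.glimpse_proj = nn.Linear(d_model * roi_size * roi_size, d_model)
        # Glimpse-based decoder: queries attend to their glimpse features.
        self.decoder = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.box_head = nn.Linear(d_model, 4)  # predicts refined box deltas

    def forward(self, feat_map, prev_hidden, prev_boxes, spatial_scale=1.0 / 32):
        # feat_map:    (1, C, H, W) backbone feature map
        # prev_hidden: (1, Q, C) attention outputs of the previous stage
        # prev_boxes:  (Q, 4) detections from the previous stage, (x1,y1,x2,y2)
        rois = enlarge_boxes(prev_boxes)  # widen the glimpse area
        batch_idx = torch.zeros(rois.size(0), 1, device=rois.device)
        pooled = roi_align(feat_map, torch.cat([batch_idx, rois], dim=1),
                           output_size=self.roi_size, spatial_scale=spatial_scale)
        glimpse = self.glimpse_proj(pooled.flatten(1)).unsqueeze(0)  # (1, Q, C)
        # Refine the previous stage's outputs against the glimpse features.
        refined, _ = self.decoder(prev_hidden, glimpse, glimpse)
        return refined, prev_boxes + self.box_head(refined).squeeze(0)

Stacking several such stages, each taking the previous stage's hidden states and boxes as input, would give the multi-stage recurrent processing structure described in the abstract; box normalization and the classification head are omitted here for brevity.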

Related articles:
arXiv:2206.07435 [cs.CV] (Published 2022-06-15)
Forecasting of depth and ego-motion with transformers and self-supervision
arXiv:2003.08077 [cs.CV] (Published 2020-03-18)
Scene Text Recognition via Transformer
arXiv:2205.13943 [cs.CV] (Published 2022-05-27)
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Siyuan Li et al.