arXiv Analytics

arXiv:2211.12739 [cs.CV]

Texts as Images in Prompt Tuning for Multi-Label Image Recognition

Zixian Guo, Bowen Dong, Zhilong Ji, Jinfeng Bai, Yiwen Guo, Wangmeng Zuo

Published 2022-11-23 (Version 1)

Prompt tuning has been employed as an efficient way to adapt large vision-language pre-trained models (e.g., CLIP) to various downstream tasks in data-limited or label-limited settings. Nonetheless, visual data (e.g., images) are by default a prerequisite for learning prompts in existing methods. In this work, we argue that because image-text contrastive learning (used to train CLIP) effectively aligns the two modalities, it is also feasible to treat texts as images for prompt tuning, and we introduce TaI prompting. In contrast to visual data, text descriptions are easy to collect, and their class labels can be derived directly. In particular, we apply TaI prompting to multi-label image recognition, where sentences in the wild serve as alternatives to images for prompt tuning. Moreover, building on TaI, we present double-grained prompt tuning (TaI-DPT), which extracts both coarse-grained and fine-grained embeddings to further enhance multi-label recognition performance. Experimental results show that our proposed TaI-DPT outperforms zero-shot CLIP by a large margin on multiple benchmarks, e.g., MS-COCO, VOC2007, and NUS-WIDE, and that it can be combined with existing image-based prompting methods to improve recognition performance further. Code is released at https://github.com/guozix/TaI-DPT.
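The core idea can be illustrated with a toy sketch: because CLIP's contrastive training places text and image embeddings in a shared space, frozen embeddings of text descriptions can stand in for image embeddings when optimizing class prompts. The code below is a minimal, hedged illustration of this substitution, not the paper's actual method: the random tensors stand in for CLIP's frozen encoders, the per-class vectors stand in for CoOp-style learnable prompt contexts, and the labels are assumed to be derivable from the captions (e.g., from nouns they contain), as the abstract describes.

```python
import torch
import torch.nn.functional as F

# Toy sketch of "texts as images" (TaI) prompt tuning. All tensors below are
# placeholders: in the real method, caption_embeds would come from CLIP's
# frozen text encoder and class_prompts from learnable context vectors passed
# through that same encoder.

torch.manual_seed(0)
num_classes, embed_dim, batch = 4, 32, 8

# Learnable class-prompt embeddings (stand-in for tuned prompt contexts).
class_prompts = torch.nn.Parameter(torch.randn(num_classes, embed_dim))

# Frozen caption embeddings used *as if* they were image embeddings,
# with multi-label targets assumed to be derived from each caption's words.
caption_embeds = torch.randn(batch, embed_dim)
labels = (torch.rand(batch, num_classes) > 0.5).float()

optimizer = torch.optim.Adam([class_prompts], lr=0.1)
for _ in range(200):
    optimizer.zero_grad()
    # Cosine-similarity logits between captions and class prompts,
    # scaled by a fixed temperature (CLIP learns this scale; 10.0 is arbitrary).
    logits = F.normalize(caption_embeds, dim=-1) @ F.normalize(class_prompts, dim=-1).T
    loss = F.binary_cross_entropy_with_logits(logits * 10.0, labels)
    loss.backward()
    optimizer.step()

final_loss = loss.item()
print(f"final multi-label BCE loss: {final_loss:.4f}")
```

At test time, the same tuned `class_prompts` would be compared against image embeddings instead, which is only sensible because the two modalities share an embedding space; that alignment is exactly what the abstract identifies as the enabler of TaI prompting.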

Related articles:
arXiv:2407.20920 [cs.CV] (Published 2024-07-30)
SSPA: Split-and-Synthesize Prompting with Gated Alignments for Multi-Label Image Recognition
arXiv:1909.13005 [cs.CV] (Published 2019-09-28)
Learning Category Correlations for Multi-label Image Recognition with Graph Networks
arXiv:2107.11159 [cs.CV] (Published 2021-07-23)
Learning Discriminative Representations for Multi-Label Image Recognition