arXiv Analytics

arXiv:2406.09546 [cs.CV]

Q-Mamba: On First Exploration of Vision Mamba for Image Quality Assessment

Fengbin Guan, Xin Li, Zihao Yu, Yiting Lu, Zhibo Chen

Published 2024-06-13Version 1

In this work, we present the first exploration of the recently popular State Space Model, Mamba, for image quality assessment (IQA), aiming to observe and exploit the perceptual potential of vision Mamba. A series of works on Mamba has demonstrated its significant potential in various fields, e.g., segmentation and classification, yet its perception capability remains under-explored. We therefore propose Q-Mamba by revisiting and adapting the Mamba model for three crucial IQA tasks, i.e., task-specific, universal, and transferable IQA, revealing that Mamba holds clear advantages over existing foundation models, e.g., Swin Transformer, ViT, and CNNs, in both perception quality and computational cost for IQA. To increase the transferability of Q-Mamba, we propose the StylePrompt tuning paradigm, in which lightweight mean and variance prompts are injected to assist task-adaptive transfer learning of the pre-trained Q-Mamba on different downstream IQA tasks. Compared with existing prompt tuning strategies, StylePrompt achieves better perception transfer with less computational cost. Extensive experiments on multiple synthetic and authentic IQA datasets, as well as cross-dataset evaluations, demonstrate the effectiveness of the proposed Q-Mamba.
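The abstract does not give implementation details, but the described mechanism (lightweight mean and variance prompts modulating feature statistics) resembles AdaIN-style feature restyling. The sketch below is a speculative illustration of that idea, not the paper's actual method: per-channel statistics are normalized out and replaced by learnable `mean_prompt` and `std_prompt` parameters (all names here are hypothetical).

```python
import numpy as np

def style_prompt(features, mean_prompt, std_prompt, eps=1e-6):
    """Hypothetical sketch of a StylePrompt-like modulation (assumed mechanism,
    not from the paper): normalize per-channel feature statistics, then
    re-style them with lightweight mean/variance prompts.

    features:    (channels, height, width) feature map
    mean_prompt: (channels,) learnable shift prompt
    std_prompt:  (channels,) learnable scale prompt
    """
    # Per-channel statistics over the spatial dimensions.
    mu = features.mean(axis=(1, 2), keepdims=True)
    sigma = features.std(axis=(1, 2), keepdims=True) + eps
    # Whiten the channel statistics, then inject the prompts.
    normalized = (features - mu) / sigma
    return normalized * std_prompt[:, None, None] + mean_prompt[:, None, None]
```

With zero mean prompts and unit variance prompts, the output features are simply channel-normalized; training the prompts would then let a frozen backbone adapt its feature statistics to a downstream IQA task at very small parameter cost.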

Related articles:
arXiv:2411.12791 [cs.CV] (Published 2024-11-19)
Mitigating Perception Bias: A Training-Free Approach to Enhance LMM for Image Quality Assessment
arXiv:2112.00485 [cs.CV] (Published 2021-12-01, updated 2022-03-23)
Learning Transformer Features for Image Quality Assessment
arXiv:2405.04997 [cs.CV] (Published 2024-05-08, updated 2025-06-27)
Bridging the Gap Between Saliency Prediction and Image Quality Assessment