arXiv:2406.17815 [cs.CV]

SUM: Saliency Unification through Mamba for Visual Attention Modeling

Alireza Hosseini, Amirhossein Kazerouni, Saeed Akhavan, Michael Brudno, Babak Taati

Published 2024-06-25 (Version 1)

Visual attention modeling, important for interpreting and prioritizing visual stimuli, plays a significant role in applications such as marketing, multimedia, and robotics. Traditional saliency prediction models, especially those based on Convolutional Neural Networks (CNNs) or Transformers, achieve notable success by leveraging large-scale annotated datasets. However, the current state-of-the-art (SOTA) models that use Transformers are computationally expensive. Additionally, separate models are often required for each image type, leaving the field without a unified approach. In this paper, we propose Saliency Unification through Mamba (SUM), a novel approach that integrates the efficient long-range dependency modeling of Mamba with U-Net to provide a unified model for diverse image types. Using a novel Conditional Visual State Space (C-VSS) block, SUM dynamically adapts to various image types, including natural scenes, web pages, and commercial imagery, ensuring universal applicability across different data types. Our comprehensive evaluations across five benchmarks demonstrate that SUM seamlessly adapts to different visual characteristics and consistently outperforms existing models. These results position SUM as a versatile and powerful tool for advancing visual attention modeling, offering a robust solution universally applicable across different types of visual content.
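
To make the idea of a block that "dynamically adapts to various image types" more concrete, the following is a minimal PyTorch sketch of one way such conditioning could be wired up. It is an illustration under stated assumptions, not the paper's C-VSS implementation: the class name ConditionalBlockSketch, the FiLM-style per-channel scale-and-shift derived from a domain embedding, and the depthwise convolution standing in for the actual Mamba/VSS long-range scan are all assumptions made for readability.

# Illustrative sketch only, not the authors' C-VSS block. The Mamba/VSS scan
# is replaced by a depthwise convolution as a stand-in token mixer, and the
# domain conditioning is a FiLM-style scale-and-shift (an assumption).
import torch
import torch.nn as nn

class ConditionalBlockSketch(nn.Module):
    def __init__(self, channels: int, num_domains: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        # Stand-in for the long-range state-space (Mamba/VSS) mixer.
        self.mixer = nn.Conv2d(channels, channels, kernel_size=7,
                               padding=3, groups=channels)
        # Domain embedding -> per-channel scale and shift.
        self.cond = nn.Embedding(num_domains, 2 * channels)

    def forward(self, x: torch.Tensor, domain: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map; domain: (B,) integer image-type label,
        # e.g. 0 = natural scene, 1 = web page, 2 = commercial imagery.
        residual = x
        # Normalize over the channel dimension.
        x = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = self.mixer(x)
        scale, shift = self.cond(domain).chunk(2, dim=-1)  # each (B, C)
        x = x * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return x + residual

# Minimal usage example.
block = ConditionalBlockSketch(channels=64)
feats = torch.randn(2, 64, 32, 32)
domains = torch.tensor([0, 1])        # two images from different domains
out = block(feats, domains)           # (2, 64, 32, 32)

The design point this sketch tries to capture is that a single shared backbone can serve several image domains by letting a lightweight conditioning signal modulate its features, rather than training a separate model per image type.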
