arXiv:2412.20386 [cs.CV]

PTQ4VM: Post-Training Quantization for Visual Mamba

Younghyun Cho, Changhun Lee, Seonggon Kim, Eunhyeok Park

Published 2024-12-29 (Version 1)

Visual Mamba is an approach that extends the selective state space model, Mamba, to vision tasks. It processes image tokens sequentially in a fixed order, accumulating information to generate outputs. Despite its growing popularity for delivering high-quality outputs at low computational cost across various tasks, Visual Mamba is highly susceptible to quantization, which makes further performance improvements challenging. Our analysis reveals that the fixed token access order in Visual Mamba introduces unique quantization challenges, which we categorize into three main issues: 1) token-wise variance, 2) channel-wise outliers, and 3) a long tail of activations. To address these challenges, we propose Post-Training Quantization for Visual Mamba (PTQ4VM), which introduces two key strategies: Per-Token Static (PTS) quantization and Joint Learning of Smoothing Scale and Step Size (JLSS). To the best of our knowledge, this is the first quantization study on Visual Mamba. PTQ4VM can be applied to various Visual Mamba backbones, converting a pretrained model to a quantized format in under 15 minutes without notable quality degradation. Extensive experiments on large-scale classification and regression tasks demonstrate its effectiveness, achieving up to 1.83x speedup on GPUs with negligible accuracy loss compared to FP16. Our code is available at https://github.com/YoungHyun197/ptq4vm.
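
The abstract does not spell out how PTS quantization works, but the idea it names can be sketched: because Visual Mamba visits token positions in the same fixed order for every input, each position sees a stable activation distribution, so one static quantization scale per token position can be precalibrated offline. The PyTorch sketch below is an illustrative assumption, not the authors' implementation; the function names (`calibrate_pts_scales`, `pts_quantize`) and the absmax calibration rule are hypothetical.

```python
import torch

def calibrate_pts_scales(calib_acts: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Compute one static scale per token position from calibration activations.

    calib_acts: (num_samples, num_tokens, hidden_dim) activations collected
    offline. Since the token access order is fixed, position t sees a
    consistent distribution, making a per-token static scale meaningful.
    """
    qmax = 2 ** (n_bits - 1) - 1                 # e.g. 127 for INT8
    absmax = calib_acts.abs().amax(dim=(0, 2))   # max |activation| per token position
    return (absmax / qmax).clamp(min=1e-8)       # avoid zero scales

def pts_quantize(x: torch.Tensor, scales: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Fake-quantize activations using the precomputed per-token scales.

    x: (batch, num_tokens, hidden_dim); scales: (num_tokens,).
    """
    qmax = 2 ** (n_bits - 1) - 1
    s = scales.view(1, -1, 1)                    # broadcast over batch and channels
    q = (x / s).round().clamp(-qmax - 1, qmax)
    return q * s                                 # dequantized (fake-quant) output
```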
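Similarly, JLSS can be read as jointly optimizing a SmoothQuant-style per-channel smoothing scale (to absorb channel-wise outliers) together with LSQ-style learned step sizes, though the abstract does not confirm this correspondence. The sketch below is a minimal, hypothetical rendering under that reading; the class name, initial step values, and the straight-through gradient are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JLSSLinear(nn.Module):
    """Linear layer with a learnable smoothing scale and learnable quantization
    step sizes, to be tuned jointly on calibration data. Hypothetical sketch."""

    def __init__(self, weight: torch.Tensor, n_bits: int = 8):
        super().__init__()
        _, in_features = weight.shape
        self.weight = nn.Parameter(weight.clone())
        self.smooth = nn.Parameter(torch.ones(in_features))  # per-channel smoothing scale
        self.step_a = nn.Parameter(torch.tensor(0.1))        # activation step size
        self.step_w = nn.Parameter(torch.tensor(0.01))       # weight step size
        self.qmax = 2 ** (n_bits - 1) - 1

    def _fake_quant(self, t: torch.Tensor, step: torch.Tensor) -> torch.Tensor:
        scaled = t / step
        # Straight-through estimator: round/clamp act as identity in the backward
        # pass, so gradients reach both the input and the learnable step size.
        q = scaled + (scaled.round().clamp(-self.qmax - 1, self.qmax) - scaled).detach()
        return q * step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Migrate channel-wise outliers from activations into weights, then quantize.
        x_s = self._fake_quant(x / self.smooth, self.step_a)
        w_s = self._fake_quant(self.weight * self.smooth, self.step_w)
        return x_s @ w_s.t()
```

Under this reading, the smoothing scale and step sizes would be tuned with a short gradient-based optimization to minimize the error between quantized and FP16 layer outputs on a small calibration set, which is consistent with the abstract's sub-15-minute conversion claim.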
