{ "id": "2504.11895", "version": "v2", "published": "2025-04-16T09:21:34.000Z", "updated": "2025-05-08T09:24:41.000Z", "title": "Search is All You Need for Few-shot Anomaly Detection", "authors": [ "Qishan Wang", "Jia Guo", "Shuyong Gao", "Haofen Wang", "Li Xiong", "Junjie Hu", "Hanqi Guo", "Wenqiang Zhang" ], "categories": [ "cs.CV" ], "abstract": "Few-shot anomaly detection (FSAD) has emerged as a crucial yet challenging task in industrial inspection, where normal distribution modeling must be accomplished with only a few normal images. While existing approaches typically employ multi-modal foundation models combining language and vision modalities for prompt-guided anomaly detection, these methods often demand sophisticated prompt engineering and extensive manual tuning. In this paper, we demonstrate that a straightforward nearest-neighbor search framework can surpass state-of-the-art performance in both single-class and multi-class FSAD scenarios. Our proposed method, VisionAD, consists of four simple yet essential components: (1) scalable vision foundation models that extract universal and discriminative features; (2) dual augmentation strategies - support augmentation to enhance feature matching adaptability and query augmentation to address the oversights of single-view prediction; (3) multi-layer feature integration that captures both low-frequency global context and high-frequency local details with minimal computational overhead; and (4) a class-aware visual memory bank enabling efficient one-for-all multi-class detection. Extensive evaluations across MVTec-AD, VisA, and Real-IAD benchmarks demonstrate VisionAD's exceptional performance. Using only 1 normal images as support, our method achieves remarkable image-level AUROC scores of 97.4%, 94.8%, and 70.8% respectively, outperforming current state-of-the-art approaches by significant margins (+1.6%, +3.2%, and +1.4%). The training-free nature and superior few-shot capabilities of VisionAD make it particularly appealing for real-world applications where samples are scarce or expensive to obtain. Code is available at https://github.com/Qiqigeww/VisionAD.", "revisions": [ { "version": "v2", "updated": "2025-05-08T09:24:41.000Z" } ], "analyses": { "keywords": [ "few-shot anomaly detection", "employ multi-modal foundation models", "demonstrate visionads exceptional performance", "benchmarks demonstrate visionads exceptional", "efficient one-for-all multi-class detection" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }