arXiv:1704.05939 [cs.CV]

HPatches: A benchmark and evaluation of handcrafted and learned local descriptors

Vassileios Balntas, Karel Lenc, Andrea Vedaldi, Krystian Mikolajczyk

Published 2017-04-19 (version 1)

In this paper, we propose a novel benchmark for evaluating local image descriptors. We demonstrate that existing datasets and evaluation protocols do not unambiguously specify all aspects of evaluation, leading to inconsistencies in the results reported in the literature. Furthermore, these datasets are nearly saturated due to recent improvements in local descriptors obtained by learning them from large annotated datasets. Therefore, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols for several tasks, such as matching, retrieval and classification. This allows for more realistic, and thus more reliable, comparisons across different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep learning based descriptors within a realistic benchmark evaluation.
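The closing claim about a "simple normalisation" of hand-crafted descriptors is concrete enough to sketch. The snippet below is a minimal illustration, assuming the normalisation is along the lines of the well-known RootSIFT-style post-processing (L1 normalisation, element-wise square root, L2 normalisation) applied to histogram-based descriptors such as SIFT; the exact scheme and any whitening variants evaluated in the paper are defined there, and all names here are illustrative.

import numpy as np

def rootsift_normalise(descriptors, eps=1e-8):
    # Hypothetical "simple normalisation" in the RootSIFT style:
    # L1-normalise each descriptor, take the element-wise square root,
    # then L2-normalise. Assumes non-negative, histogram-like descriptors
    # (rows of the input matrix); the paper's actual post-processing may differ.
    d = np.asarray(descriptors, dtype=np.float64)
    d = d / (d.sum(axis=1, keepdims=True) + eps)                # L1 normalisation
    d = np.sqrt(d)                                              # power-law (square root)
    d = d / (np.linalg.norm(d, axis=1, keepdims=True) + eps)    # L2 normalisation
    return d

# Usage: normalise a batch of 128-D SIFT-like descriptors before matching.
rng = np.random.default_rng(0)
sift_like = rng.integers(0, 256, size=(1000, 128)).astype(np.float64)
normalised = rootsift_normalise(sift_like)
print(normalised.shape)               # (1000, 128)
print(np.linalg.norm(normalised[0]))  # ~1.0 (unit length after the L2 step)

After this transformation, Euclidean distances between descriptors correspond to the Hellinger kernel on the original histograms, which is one reason such a cheap post-processing step can noticeably improve matching performance.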
