arXiv:1505.03205 [cs.CV]

Leveraging Image based Prior for Visual Place Recognition

Tsukamoto Taisho, Tanaka Kanji

Published 2015-05-13 (Version 1)

In this study, we propose a novel scene descriptor for visual place recognition. Unlike popular bag-of-words scene descriptors, which rely on a library of vector-quantized visual features, our proposed descriptor is based on a library of raw image data, such as publicly available photo collections from Google StreetView and Flickr. The library images need not be associated with spatial information regarding the viewpoint and orientation of the scene. As a result, these images are cheaper to collect than database images and are readily available. Our proposed descriptor directly mines the image library to discover landmarks (i.e., image patches) that suitably match an input query/database image. The discovered landmarks are then compactly described by their pose and shape (i.e., library image ID and bounding box) and used as a compact, discriminative scene descriptor for the input image. We evaluate the effectiveness of our scene description framework by comparing its performance to that of previous approaches.
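
For concreteness, the Python sketch below outlines one way such a descriptor could be computed and compared using OpenCV. The ORB features, the brute-force Hamming matcher, the thresholds, and the Jaccard-style comparison over shared library image IDs are illustrative assumptions standing in for the paper's actual patch-mining pipeline, not a description of it.

import cv2
import numpy as np

def mine_landmarks(query_img, library, max_landmarks=10, min_matches=12):
    # Sketch: find library images whose local features match parts of the
    # query image, and record each as (library image ID, bounding box).
    # ORB + brute-force Hamming matching are stand-ins (an assumption),
    # not the matcher used in the paper.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    if des_q is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    candidates = []
    for lib_id, lib_img in enumerate(library):
        kp_l, des_l = orb.detectAndCompute(lib_img, None)
        if des_l is None:
            continue
        matches = matcher.match(des_q, des_l)
        if len(matches) < min_matches:
            continue
        # Bounding box of the matched keypoints in the query image:
        # the landmark's "pose and shape" entry in the descriptor.
        pts = np.float32([kp_q[m.queryIdx].pt for m in matches])
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        candidates.append((lib_id, (int(x0), int(y0), int(x1), int(y1)), len(matches)))
    # Keep only the most strongly supported landmarks so the descriptor stays compact.
    candidates.sort(key=lambda t: -t[2])
    return [(lib_id, bbox) for lib_id, bbox, _ in candidates[:max_landmarks]]

def descriptor_similarity(d1, d2):
    # Sketch: two scenes are considered similar if their descriptors
    # reuse many of the same library images (Jaccard overlap of IDs).
    ids1 = {lib_id for lib_id, _ in d1}
    ids2 = {lib_id for lib_id, _ in d2}
    if not ids1 or not ids2:
        return 0.0
    return len(ids1 & ids2) / len(ids1 | ids2)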

Comments: 8 pages, 6 figures, preprint. Accepted for publication in MVA2015 (oral presentation)
Categories: cs.CV
Related articles:
arXiv:1608.04274 [cs.CV] (Published 2016-08-15)
Visual place recognition using landmark distribution descriptors
arXiv:2412.06153 [cs.CV] (Published 2024-12-09, updated 2025-06-27)
A Hyperdimensional One Place Signature to Represent Them All: Stackable Descriptors For Visual Place Recognition
arXiv:1803.04228 [cs.CV] (Published 2018-03-12)
Omnidirectional CNN for Visual Place Recognition and Navigation