{ "id": "2307.15381", "version": "v1", "published": "2023-07-28T08:05:36.000Z", "updated": "2023-07-28T08:05:36.000Z", "title": "AffineGlue: Joint Matching and Robust Estimation", "authors": [ "Daniel Barath", "Dmytro Mishkin", "Luca Cavalli", "Paul-Edouard Sarlin", "Petr Hruby", "Marc Pollefeys" ], "categories": [ "cs.CV" ], "abstract": "We propose AffineGlue, a method for joint two-view feature matching and robust estimation that reduces the combinatorial complexity of the problem by employing single-point minimal solvers. AffineGlue selects potential matches from one-to-many correspondences to estimate minimal models. Guided matching is then used to find matches consistent with the model, suffering less from the ambiguities of one-to-one matches. Moreover, we derive a new minimal solver for homography estimation, requiring only a single affine correspondence (AC) and a gravity prior. Furthermore, we train a neural network to reject ACs that are unlikely to lead to a good model. AffineGlue is superior to the SOTA on real-world datasets, even when assuming that the gravity direction points downwards. On PhotoTourism, the AUC@10{\\deg} score is improved by 6.6 points compared to the SOTA. On ScanNet, AffineGlue makes SuperPoint and SuperGlue achieve similar accuracy as the detector-free LoFTR.", "revisions": [ { "version": "v1", "updated": "2023-07-28T08:05:36.000Z" } ], "analyses": { "keywords": [ "robust estimation", "joint matching", "superglue achieve similar accuracy", "affineglue selects potential matches", "joint two-view feature" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }