{
  "id": "2011.12149",
  "version": "v1",
  "published": "2020-11-24T15:00:56.000Z",
  "updated": "2020-11-24T15:00:56.000Z",
  "title": "SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration",
  "authors": [
    "Sheng Ao",
    "Qingyong Hu",
    "Bo Yang",
    "Andrew Markham",
    "Yulan Guo"
  ],
  "categories": [
    "cs.CV",
    "cs.AI",
    "cs.LG",
    "cs.RO"
  ],
  "abstract": "Extracting robust and general 3D local features is key to downstream tasks such as point cloud registration and reconstruction. Existing learning-based local descriptors are either sensitive to rotation transformations, or rely on classical handcrafted features which are neither general nor representative. In this paper, we introduce a new, yet conceptually simple, neural architecture, termed SpinNet, to extract local features which are rotationally invariant whilst sufficiently informative to enable accurate registration. A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with SO(2) equivariant representation. A Neural Feature Extractor which leverages the powerful point-based and 3D cylindrical convolutional neural layers is then utilized to derive a compact and representative descriptor for matching. Extensive experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques by a large margin. More critically, it has the best generalization ability across unseen scenarios with different sensor modalities. The code is available at https://github.com/QingyongHu/SpinNet.",
  "revisions": [
    {
      "version": "v1",
      "updated": "2020-11-24T15:00:56.000Z"
    }
  ],
  "analyses": {
    "keywords": [
      "3d point cloud registration",
      "general surface descriptor",
      "cylindrical convolutional neural layers",
      "outperforms existing state-of-the-art techniques",
      "invariant whilst sufficiently informative"
    ],
    "note": {
      "typesetting": "TeX",
      "pages": 0,
      "language": "en",
      "license": "arXiv",
      "status": "editable"
    }
  }
}