arXiv Analytics

arXiv:2109.01291 [cs.CV]

CAP-Net: Correspondence-Aware Point-view Fusion Network for 3D Shape Analysis

Xinwei He, Silin Cheng, Song Bai, Xiang Bai

Published 2021-09-03 (Version 1)

Learning 3D representations by fusing point cloud and multi-view data has proven highly effective. While prior works typically focus on exploiting global features of the two modalities, in this paper we argue that more discriminative features can be derived by modeling "where to fuse". To investigate this, we propose a novel Correspondence-Aware Point-view Fusion Net (CAP-Net). The core element of CAP-Net is a module named Correspondence-Aware Fusion (CAF), which integrates the local features of the two modalities based on their correspondence scores. We further propose to filter out correspondence scores with low values to obtain salient local correspondences, reducing redundancy in the fusion process. In CAP-Net, we apply the CAF modules to fuse the multi-scale features of the two modalities both bidirectionally and hierarchically to obtain more informative features. Comprehensive evaluations on popular 3D shape benchmarks covering 3D object classification and retrieval show the superiority of the proposed framework.
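The fusion step the abstract describes can be illustrated with a minimal sketch. Note this is not the authors' implementation: the score function (scaled dot-product with softmax), the threshold value `tau`, and the residual-style fusion are all assumptions made for illustration; the paper only specifies that local features are combined according to correspondence scores, with low scores filtered out.

```python
import numpy as np

def caf_fuse(point_feats, view_feats, tau=0.05):
    """Hypothetical sketch of a Correspondence-Aware Fusion (CAF) step.

    point_feats: (N, d) local features from the point-cloud branch.
    view_feats:  (M, d) local features from the multi-view branch.
    tau: assumed threshold below which correspondence scores are dropped.
    """
    d = point_feats.shape[1]
    # Correspondence scores: scaled dot-product, softmax over view features.
    logits = point_feats @ view_feats.T / np.sqrt(d)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    scores = np.exp(logits)
    scores /= scores.sum(axis=1, keepdims=True)
    # Filter out low-value scores to keep only salient correspondences,
    # then renormalize the surviving scores per point.
    scores = np.where(scores >= tau, scores, 0.0)
    norm = scores.sum(axis=1, keepdims=True)
    norm[norm == 0.0] = 1.0  # avoid division by zero if all scores filtered
    scores /= norm
    # Fuse: each point feature is augmented by a weighted sum of the
    # view features it salient-corresponds to (residual-style combination).
    return point_feats + scores @ view_feats

rng = np.random.default_rng(0)
pts = rng.standard_normal((6, 8))    # 6 point-cloud local features
views = rng.standard_normal((4, 8))  # 4 multi-view local features
fused = caf_fuse(pts, views)
print(fused.shape)  # (6, 8)
```

In the full network this operation would run in both directions (points attending to views and views attending to points) and at multiple feature scales, per the bidirectional and hierarchical fusion the abstract describes.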

Related articles: Most relevant | Search more
arXiv:1909.12887 [cs.CV] (Published 2019-09-27)
A Topological Nomenclature for 3D Shape Analysis in Connectomics
arXiv:2312.16477 [cs.CV] (Published 2023-12-27, updated 2023-12-30)
Group Multi-View Transformer for 3D Shape Analysis with Spatial Encoding
Lixiang Xu et al.
arXiv:1712.01537 [cs.CV] (Published 2017-12-05)
O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis