arXiv Analytics

arXiv:1903.10297 [cs.CV]

CoSegNet: Deep Co-Segmentation of 3D Shapes with Group Consistency Loss

Chenyang Zhu, Kai Xu, Siddhartha Chaudhuri, Li Yi, Leonidas Guibas, Hao Zhang

Published 2019-03-25 (Version 1)

We introduce CoSegNet, a deep neural network architecture for co-segmentation of a set of 3D shapes represented as point clouds. CoSegNet takes as input a set of unsegmented shapes, proposes per-shape parts, and then jointly optimizes the part labelings across the set subject to a novel group consistency loss expressed via matrix rank estimates. The proposals are refined in each iteration by an auxiliary network that acts as a weak regularizing prior, pre-trained to denoise noisy, unlabeled parts from a large collection of segmented 3D shapes, where the part compositions within the same object category can be highly inconsistent. The output is a consistent part labeling for the input set, with each shape segmented into up to K (a user-specified hyperparameter) parts. The overall pipeline is thus weakly supervised, producing consistent segmentations tailored to the test set without consistent ground-truth segmentations. We show qualitative and quantitative results from CoSegNet and evaluate it via ablation studies and comparisons to state-of-the-art co-segmentation methods.
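The core idea of the group consistency loss is that, when part labelings agree across the set, a matrix built from per-shape part descriptors becomes low-rank. As a minimal sketch of that intuition (not the paper's actual loss or network), the following hypothetical `group_consistency_loss` stacks one part-score row per shape and uses the nuclear norm, the sum of singular values, as a standard convex surrogate for matrix rank:

```python
import numpy as np

def group_consistency_loss(part_features):
    """Hypothetical rank-based consistency sketch.

    part_features: (num_shapes, K) matrix with one row of part scores
    per shape.  Consistent labelings make the rows similar, so the
    matrix is close to rank 1; the nuclear norm (sum of singular
    values) penalizes higher-rank, i.e. inconsistent, labelings.
    """
    singular_values = np.linalg.svd(part_features, compute_uv=False)
    return float(singular_values.sum())

# Four shapes with identical labelings form a rank-1 matrix,
# while fully disagreeing labelings form a full-rank one.
consistent = np.tile(np.array([1.0, 0.0, 0.0]), (4, 1))
inconsistent = np.eye(3)

print(group_consistency_loss(consistent))    # 2.0 (rank 1)
print(group_consistency_loss(inconsistent))  # 3.0 (rank 3)
```

In the paper this idea is applied as a differentiable loss inside an iterative deep pipeline; the sketch only illustrates why a rank estimate can measure labeling agreement across a shape set.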

Related articles:
arXiv:2006.07982 [cs.CV] (Published 2020-06-14)
ShapeFlow: Learnable Deformations Among 3D Shapes
arXiv:2302.01721 [cs.CV] (Published 2023-02-03)
TEXTure: Text-Guided Texturing of 3D Shapes
arXiv:1903.03911 [cs.CV] (Published 2019-03-10)
Shape2Motion: Joint Analysis of Motion Parts and Attributes from 3D Shapes