{ "id": "2407.10200", "version": "v1", "published": "2024-07-14T13:42:05.000Z", "updated": "2024-07-14T13:42:05.000Z", "title": "Shape2Scene: 3D Scene Representation Learning Through Pre-training on Shape Data", "authors": [ "Tuo Feng", "Wenguan Wang", "Ruijie Quan", "Yi Yang" ], "comment": "ECCV 2024; Project page: https://github.com/FengZicai/S2S", "categories": [ "cs.CV", "cs.AI" ], "abstract": "Current 3D self-supervised learning methods of 3D scenes face a data desert issue, resulting from the time-consuming and expensive collecting process of 3D scene data. Conversely, 3D shape datasets are easier to collect. Despite this, existing pre-training strategies on shape data offer limited potential for 3D scene understanding due to significant disparities in point quantities. To tackle these challenges, we propose Shape2Scene (S2S), a novel method that learns representations of large-scale 3D scenes from 3D shape data. We first design multiscale and high-resolution backbones for shape and scene level 3D tasks, i.e., MH-P (point-based) and MH-V (voxel-based). MH-P/V establishes direct paths to highresolution features that capture deep semantic information across multiple scales. This pivotal nature makes them suitable for a wide range of 3D downstream tasks that tightly rely on high-resolution features. We then employ a Shape-to-Scene strategy (S2SS) to amalgamate points from various shapes, creating a random pseudo scene (comprising multiple objects) for training data, mitigating disparities between shapes and scenes. Finally, a point-point contrastive loss (PPC) is applied for the pre-training of MH-P/V. In PPC, the inherent correspondence (i.e., point pairs) is naturally obtained in S2SS. Extensive experiments have demonstrated the transferability of 3D representations learned by MH-P/V across shape-level and scene-level 3D tasks. MH-P achieves notable performance on well-known point cloud datasets (93.8% OA on ScanObjectNN and 87.6% instance mIoU on ShapeNetPart). MH-V also achieves promising performance in 3D semantic segmentation and 3D object detection.", "revisions": [ { "version": "v1", "updated": "2024-07-14T13:42:05.000Z" } ], "analyses": { "keywords": [ "3d scene representation learning", "shape data", "data offer limited potential", "3d self-supervised learning methods", "shape2scene" ], "tags": [ "github project" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }