arXiv:1908.02893 [cs.CV]

EdgeNet: Semantic Scene Completion from RGB-D images

Aloisio Dourado, Teofilo Emidio de Campos, Hansung Kim, Adrian Hilton

Published 2019-08-08 (Version 1)

Semantic scene completion is the task of predicting a complete 3D representation of volumetric occupancy, with corresponding semantic labels, for a scene observed from a single point of view. Previous works on semantic scene completion from RGB-D data used either depth alone or depth with colour, projecting the 2D image into the 3D volume and resulting in a sparse data representation. In this work, we present a new strategy to encode colour information in 3D space using edge detection and flipped truncated signed distance. We also present EdgeNet, a new end-to-end neural network architecture capable of handling features generated from the fusion of depth and edge information. Experimental results show an improvement of 6.9% over the state-of-the-art result on real data among end-to-end approaches.
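The abstract describes encoding colour as edges lifted into the 3D volume and converted to a flipped-TSDF-style signal. The sketch below illustrates that idea only; it is not the authors' released implementation. The Canny thresholds, grid size, voxel size, truncation band, and the grid-alignment step are illustrative assumptions, and the paper's actual encoding may additionally handle sign and visibility, which this sketch omits.

```python
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt

def edge_ftsdf_volume(rgb, depth, K, grid_shape=(240, 144, 240),
                      voxel_size=0.02, trunc_voxels=12):
    """Illustrative encoding of colour edges as a flipped-TSDF-like 3D volume.

    rgb:   HxWx3 uint8 colour image
    depth: HxW float32 depth map in metres (0 = missing depth)
    K:     3x3 camera intrinsics (pinhole model assumed)
    """
    # 1. Detect edges in the colour image (thresholds are illustrative).
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200) > 0

    # 2. Back-project edge pixels with valid depth into camera space.
    v, u = np.nonzero(edges & (depth > 0))
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)

    # 3. Quantise the 3D points to voxel indices and mark edge-occupied voxels.
    idx = np.floor(pts / voxel_size).astype(np.int64)
    idx -= idx.min(axis=0)                      # crude alignment, sketch only
    keep = np.all(idx < np.array(grid_shape), axis=1)
    occ = np.zeros(grid_shape, dtype=bool)
    occ[tuple(idx[keep].T)] = True

    # 4. Distance (in voxels) to the nearest edge voxel, truncated and
    #    "flipped" so voxels near edges get large values: 1 at an edge,
    #    falling to 0 beyond the truncation band.
    dist = distance_transform_edt(~occ)
    ftsdf = np.clip(1.0 - dist / trunc_voxels, 0.0, 1.0).astype(np.float32)
    return ftsdf
```

In this reading, the resulting volume carries strong responses near colour boundaries and decays smoothly away from them, giving the 3D network a dense, differentiable-friendly signal rather than a sparse projection of raw RGB values.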

Related articles:
arXiv:1312.7715 [cs.CV] (Published 2013-12-30, updated 2014-07-31)
Constrained Parametric Proposals and Pooling Methods for Semantic Segmentation in RGB-D Images
arXiv:2403.08885 [cs.CV] (Published 2024-03-13)
SLCF-Net: Sequential LiDAR-Camera Fusion for Semantic Scene Completion using a 3D Recurrent U-Net
arXiv:2403.07560 [cs.CV] (Published 2024-03-12)
Unleashing Network Potentials for Semantic Scene Completion