arXiv Analytics


arXiv:1907.05193 [cs.CV]

Cross-Domain Complementary Learning with Synthetic Data for Multi-Person Part Segmentation

Kevin Lin, Lijuan Wang, Kun Luo, Yinpeng Chen, Zicheng Liu, Ming-Ting Sun

Published 2019-07-11 (Version 1)

The success of supervised deep learning depends on the training labels. However, pixel-level data labeling is very expensive, and people have been exploring synthetic data as an alternative. Even though it is easy to generate labels for synthetic data, the quality gap makes it challenging to transfer knowledge from synthetic data to real data. In this paper, we propose a novel technique, called cross-domain complementary learning, that takes advantage of the rich variations of real data and the easily obtainable labels of synthetic data to learn multi-person part segmentation on real images without any human-annotated segmentation labels. To align the synthetic data and real data in a common latent space, we use an auxiliary task of human pose estimation to bridge the two domains. Without any real part segmentation training data, our method performs comparably on the Pascal-Person-Parts and COCO-DensePose datasets to several state-of-the-art supervised approaches that require real part segmentation training data. We further demonstrate the generalizability of our method by predicting novel keypoints in the wild, where no real labels are available for the novel keypoints.
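To make the training setup concrete, the sketch below illustrates the core idea described in the abstract: part-segmentation labels come only from synthetic data, while a human-pose auxiliary task is supervised on both synthetic and real images so that the shared encoder aligns the two domains. This is a minimal, hypothetical illustration; the module names (SharedEncoder, CrossDomainModel), head architectures, loss weights, and the choice of heatmap regression for pose are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch of cross-domain complementary learning:
# segmentation supervised on synthetic data only, pose supervised on both domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PARTS = 7       # e.g. background + 6 body parts (assumption)
NUM_KEYPOINTS = 17  # e.g. COCO-style keypoints (assumption)


class SharedEncoder(nn.Module):
    """Small convolutional encoder shared by both tasks (placeholder)."""
    def __init__(self, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)


class CrossDomainModel(nn.Module):
    """Shared encoder with a part-segmentation head and a pose (heatmap) head."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = SharedEncoder(channels)
        self.seg_head = nn.Conv2d(channels, NUM_PARTS, 1)       # per-pixel part logits
        self.pose_head = nn.Conv2d(channels, NUM_KEYPOINTS, 1)  # keypoint heatmaps

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.pose_head(feats)


def training_step(model, optimizer, synth_batch, real_batch, lambda_pose=1.0):
    """One optimization step: segmentation loss on synthetic data only,
    pose loss on both domains to bridge them in the shared feature space."""
    synth_img, synth_parts, synth_heatmaps = synth_batch
    real_img, real_heatmaps = real_batch  # no part labels for real images

    seg_logits_s, pose_pred_s = model(synth_img)
    _, pose_pred_r = model(real_img)

    # Supervised part segmentation on synthetic data (labels are cheap to generate).
    loss_seg = F.cross_entropy(seg_logits_s, synth_parts)
    # Auxiliary pose estimation supervised on BOTH domains (heatmap regression assumed).
    loss_pose = F.mse_loss(pose_pred_s, synth_heatmaps) + F.mse_loss(pose_pred_r, real_heatmaps)

    loss = loss_seg + lambda_pose * loss_pose
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = CrossDomainModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Dummy tensors standing in for synthetic and real data loaders.
    synth_batch = (torch.randn(2, 3, 64, 64),
                   torch.randint(0, NUM_PARTS, (2, 64, 64)),
                   torch.rand(2, NUM_KEYPOINTS, 64, 64))
    real_batch = (torch.randn(2, 3, 64, 64),
                  torch.rand(2, NUM_KEYPOINTS, 64, 64))
    print("loss:", training_step(model, optimizer, synth_batch, real_batch))
```

The key design choice reflected here is that the pose head, supervised in both domains, forces the shared encoder to produce features that transfer to real images, so the segmentation head trained only on synthetic labels can still be applied to real data.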
