arXiv Analytics

arXiv:1804.06516 [cs.CV]

Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization

Jonathan Tremblay, Aayush Prakash, David Acuna, Mark Brophy, Varun Jampani, Cem Anil, Thang To, Eric Cameracci, Shaad Boochoon, Stan Birchfield

Published 2018-04-18 (Version 1)

We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator (such as lighting, pose, and object textures) are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds, both of which remain bottlenecks for many applications. The approach is evaluated on bounding-box detection of cars on the KITTI dataset.
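The core idea of domain randomization is to draw each synthetic training scene from wide, deliberately non-realistic distributions over simulator parameters. A minimal sketch of such a sampler is shown below; the parameter names and value ranges are illustrative assumptions, not the values used in the paper, and the output would be fed to a renderer that produces the actual training image.

```python
import random

def sample_scene_params(rng: random.Random) -> dict:
    """Sample one randomized synthetic-scene configuration.

    Each call draws lighting, object pose, textures, and camera settings
    from broad uniform distributions; the ranges here are placeholders.
    """
    return {
        # Lighting: number of light sources and a shared intensity scale.
        "num_lights": rng.randint(1, 4),
        "light_intensity": rng.uniform(0.2, 2.0),
        # Object pose: 3D position and orientation (yaw, degrees).
        "object_position": [rng.uniform(-5.0, 5.0) for _ in range(3)],
        "object_yaw_deg": rng.uniform(0.0, 360.0),
        # Non-realistic textures: random flat RGB colors rather than
        # photorealistic materials.
        "object_texture_rgb": [rng.random() for _ in range(3)],
        "background_texture_rgb": [rng.random() for _ in range(3)],
        # Camera distance from the object of interest.
        "camera_distance": rng.uniform(2.0, 10.0),
    }

# Each sampled dict describes one synthetic scene to render and label.
rng = random.Random(0)
scene = sample_scene_params(rng)
```

Because every scene varies wildly in appearance while the object's geometry stays consistent, a detector trained on such data is pushed to rely on shape cues that transfer to real images.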

Comments: CVPR 2018 Workshop on Autonomous Driving
Categories: cs.CV
Related articles:
arXiv:2011.08517 [cs.CV] (Published 2020-11-17)
Bridging the Performance Gap Between Pose Estimation Networks Trained on Real And Synthetic Data Using Domain Randomization
arXiv:2104.02815 [cs.CV] (Published 2021-04-06)
On the Applicability of Synthetic Data for Face Recognition
arXiv:2108.07960 [cs.CV] (Published 2021-08-18)
SynFace: Face Recognition with Synthetic Data