arXiv:1607.02241 [cs.LG]

Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks

Darryl D. Lin, Sachin S. Talathi

Published 2016-07-08 (Version 1)

Training deep neural networks, in particular deep convolutional networks, with aggressively reduced numerical precision is known to be challenging: the stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One well-accepted solution that facilitates the training of low-precision fixed-point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work attempts to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we also propose, and verify through experiments, methods that improve the training performance of deep convolutional networks in fixed point.
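
The abstract refers to stochastic rounding as a stabilizing technique for fixed-point training. As a point of reference, the sketch below (not taken from the paper; the function name and parameters are illustrative) shows one common way to quantize a value to a signed fixed-point grid with unbiased stochastic rounding, using NumPy:

import numpy as np

def stochastic_round_fixed_point(x, frac_bits=8, word_bits=16):
    """Quantize x to a signed fixed-point grid using stochastic rounding.

    frac_bits: number of fractional bits (grid step is 2**-frac_bits)
    word_bits: total word length, used to saturate to the representable range
    """
    step = 2.0 ** -frac_bits
    scaled = x / step
    floor_val = np.floor(scaled)
    # Round up with probability equal to the fractional remainder,
    # so the rounding is unbiased: E[rounded] == scaled.
    prob_up = scaled - floor_val
    rounded = floor_val + (np.random.random_sample(np.shape(x)) < prob_up)
    # Saturate to the representable signed integer range before rescaling.
    max_int = 2 ** (word_bits - 1) - 1
    min_int = -2 ** (word_bits - 1)
    return np.clip(rounded, min_int, max_int) * step

For example, stochastic_round_fixed_point(0.7, frac_bits=1) returns 1.0 with probability 0.4 and 0.5 with probability 0.6, so the expected value is exactly 0.7. This unbiasedness in expectation is the usual argument for preferring stochastic rounding over round-to-nearest when gradient updates are smaller than the quantization step.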

Comments: ICML 2016 Workshop on On-Device Intelligence
Categories: cs.LG, cs.CV
Related articles:
arXiv:1812.10386 [cs.LG] (Published 2018-12-26)
ECG Segmentation by Neural Networks: Errors and Correction
arXiv:1808.05819 [cs.LG] (Published 2018-08-17)
Motion Prediction of Traffic Actors for Autonomous Driving using Deep Convolutional Networks
arXiv:1904.03837 [cs.LG] (Published 2019-04-08)
Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure