arXiv Analytics

arXiv:1912.08526 [stat.ML]

Analytic expressions for the output evolution of a deep neural network

Anastasia Borovykh

Published 2019-12-18, Version 1

We present a methodology, based on a Taylor expansion of the network output, for obtaining analytical expressions for the expected value of the network weights and output under stochastic training. Using these expressions, we study how the hyperparameters and the noise variance of the optimization algorithm affect the performance of the deep neural network. In the early phases of training with a small noise coefficient, the output is equivalent to that of a linear model. In this regime the network can generalize better because the noise prevents the output from fully converging on the training data; however, the noise does not provide any explicit regularization. In the later stages of training, when higher-order approximations are required, the impact of the noise becomes more significant: in a model that is non-linear in the weights, noise can regularize the output function, resulting in better generalization, as witnessed by its influence on the weight Hessian, a commonly used metric for generalization capability.
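The linear-model equivalence in the early training phase can be illustrated with a small numerical sketch. This is not the paper's exact setup; the network architecture, noise model, and all variable names below are illustrative assumptions. We Taylor-expand a one-hidden-layer network's output to first order around its initial weights and check that, after a short run of noisy gradient descent, the full output is still close to this linearization:

```python
# Minimal sketch (illustrative, not the paper's exact construction):
# first-order Taylor expansion of a small network's output around its
# initial weights w0, compared against the full network after a short
# run of gradient descent with additive Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer network f(x; w) = v . tanh(W x)
d, h = 3, 5
W0 = rng.normal(size=(h, d)) / np.sqrt(d)
v0 = rng.normal(size=h) / np.sqrt(h)

def flatten(W, v):
    return np.concatenate([W.ravel(), v])

def unflatten(w):
    return w[: h * d].reshape(h, d), w[h * d:]

def f(w, x):
    W, v = unflatten(w)
    return v @ np.tanh(W @ x)

def grad_f(w, x):
    # Analytic gradient of the scalar output w.r.t. all weights.
    W, v = unflatten(w)
    a = np.tanh(W @ x)
    dW = np.outer(v * (1 - a**2), x)  # d f / d W
    dv = a                            # d f / d v
    return flatten(dW, dv)

w0 = flatten(W0, v0)

# Toy regression data
X = rng.normal(size=(20, d))
y = rng.normal(size=20)

# Noisy gradient descent on the squared loss, small noise coefficient
w = w0.copy()
lr, noise = 0.05, 1e-3
for _ in range(50):
    i = rng.integers(len(X))
    g = (f(w, X[i]) - y[i]) * grad_f(w, X[i])
    w = w - lr * g + noise * rng.normal(size=w.shape)

# First-order Taylor expansion of the output around w0:
#   f_lin(w, x) = f(w0, x) + grad_f(w0, x) . (w - w0)
x_test = rng.normal(size=d)
f_full = f(w, x_test)
f_lin = f(w0, x_test) + grad_f(w0, x_test) @ (w - w0)
print(abs(f_full - f_lin))  # small early in training: near-linear regime
```

Early in training the discrepancy stays small because the weights remain close to their initialization; running the loop for many more steps lets the higher-order terms, and hence the noise's regularizing effect, become significant.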

Related articles:
arXiv:2202.07679 [stat.ML] (Published 2022-02-15)
Taking a Step Back with KCal: Multi-Class Kernel-Based Calibration for Deep Neural Networks
arXiv:1907.02177 [stat.ML] (Published 2019-07-04)
Adaptive Approximation and Estimation of Deep Neural Network to Intrinsic Dimensionality
arXiv:1606.05018 [stat.ML] (Published 2016-06-16)
Improving Power Generation Efficiency using Deep Neural Networks