arXiv:2301.13370 [cs.LG]

On the Correctness of Automatic Differentiation for Neural Networks with Machine-Representable Parameters

Wonyeol Lee, Sejun Park, Alex Aiken

Published 2023-01-31 (Version 1)

Recent work has shown that automatic differentiation over the reals is almost always correct in a mathematically precise sense. However, actual programs work with machine-representable numbers (e.g., floating-point numbers), not reals. In this paper, we study the correctness of automatic differentiation when the parameter space of a neural network consists solely of machine-representable numbers. For a neural network with bias parameters, we prove that automatic differentiation is correct at all parameters where the network is differentiable. In contrast, it is incorrect at all parameters where the network is non-differentiable, since it never signals non-differentiability. To better understand this non-differentiable set of parameters, we prove a tight bound on its size that is linear in the number of non-differentiable points of the activation functions, and we provide a simple necessary and sufficient condition for a parameter to be in this set. We further prove that automatic differentiation always computes a Clarke subderivative, even on the non-differentiable set. We also extend these results to neural networks that may lack bias parameters.
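The phenomenon the abstract describes is easy to observe in practice: at a kink, an AD system silently returns one element of the Clarke subdifferential rather than reporting non-differentiability. Below is a minimal sketch in JAX illustrating this at ReLU's kink at x = 0, where the Clarke subdifferential is the interval [0, 1]. The specific values printed are implementation details of JAX's derivative rules and may vary across versions; they are noted here as observations, not guarantees.

```python
import jax
import jax.numpy as jnp

# jax.nn.relu carries a custom derivative rule; at the kink x = 0 it
# returns 0.0 in recent JAX versions, an endpoint of the Clarke
# subdifferential [0, 1].
print(jax.grad(jax.nn.relu)(0.0))

# Writing ReLU via jnp.maximum exercises a different derivative rule;
# at the tie x == 0 it may return a different element of [0, 1]
# (0.5 in the versions we have seen -- again an implementation detail).
relu_via_max = lambda x: jnp.maximum(x, 0.0)
print(jax.grad(relu_via_max)(0.0))

# In neither case does AD raise an error or otherwise flag the kink:
# it silently returns one Clarke subderivative, which is the behavior
# the paper analyzes over machine-representable parameters.
```

Note that both returned values lie in [0, 1], consistent with the paper's result that AD always computes a Clarke subderivative even on the non-differentiable set; what AD never does is indicate that the point was non-differentiable in the first place.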

Related articles:
arXiv:2006.06903 [cs.LG] (Published 2020-06-12)
On Correctness of Automatic Differentiation for Non-Differentiable Functions
arXiv:1810.11530 [cs.LG] (Published 2018-10-26)
Automatic differentiation in ML: Where we are and where we should be going
arXiv:2006.02080 [cs.LG] (Published 2020-06-03)
A mathematical model for automatic differentiation in machine learning