arXiv Analytics

arXiv:2207.01996 [cond-mat.stat-mech]

Correlation between entropy and generalizability in a neural network

Ge Zhang

Published 2022-07-05 (Version 1)

Although neural networks can solve very complex machine-learning problems, the theoretical reason for their generalizability is still not fully understood. Here we use the Wang-Landau Monte Carlo algorithm to calculate the entropy (the logarithm of the volume of a region of parameter space) at a given test accuracy and a given training-loss value or training accuracy. Our results show that entropic forces help generalizability. Although our study concerns a very simple application of neural networks (a spiral dataset and a small, fully connected network), our approach should be useful in explaining the generalizability of more complicated neural networks in future work.
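For readers unfamiliar with the method, the sketch below illustrates the Wang-Landau mechanism the abstract refers to: a random walk over model parameters that biases itself against already-visited loss levels, so that the accumulated weights converge (up to an additive constant) to the entropy ln g(E), the log volume of parameter space at each loss level. The quadratic toy loss, bin edges, step size, and flatness criterion here are illustrative assumptions, not the paper's actual network or settings.

```python
# Minimal Wang-Landau sketch over a model's parameter space.
# The toy quadratic loss stands in for a network's training loss;
# all tuning constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def loss(params):
    # Hypothetical stand-in for the training loss of a small network.
    return float(np.sum(params ** 2))

n_params = 10
bins = np.linspace(0.0, 20.0, 41)   # discretize the loss ("energy") axis
n_bins = len(bins) - 1
S = np.zeros(n_bins)                # running estimate of ln g(E), the entropy
H = np.zeros(n_bins)                # visit histogram for the flatness check
ln_f = 1.0                          # Wang-Landau modification factor

def bin_of(e):
    # Map a loss value to its bin; out-of-range values land in the edge bins.
    i = np.searchsorted(bins, e, side="right") - 1
    return min(max(i, 0), n_bins - 1)

params = rng.normal(size=n_params)
b = bin_of(loss(params))

while ln_f > 1e-4:
    for _ in range(10_000):
        trial = params + 0.1 * rng.normal(size=n_params)  # local move
        b_t = bin_of(loss(trial))
        # Accept with probability min(1, g(E)/g(E')) = min(1, exp(S[b] - S[b_t])),
        # which biases the walk toward rarely visited loss levels.
        if np.log(rng.random()) < S[b] - S[b_t]:
            params, b = trial, b_t
        S[b] += ln_f    # penalize the visited bin
        H[b] += 1
    visited = H > 0
    if H[visited].min() > 0.8 * H[visited].mean():  # crude flatness test
        ln_f *= 0.5     # refine the estimate and restart the histogram
        H[:] = 0

# S - S.max() approximates the relative entropy ln g(E) across loss bins.
print(S - S.max())
```

Note that the paper's calculation conditions on both test accuracy and a training-loss value (or training accuracy), which would make the binning two-dimensional; the one-dimensional version above keeps the sketch short.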
