{ "id": "2207.01996", "version": "v1", "published": "2022-07-05T12:28:13.000Z", "updated": "2022-07-05T12:28:13.000Z", "title": "Correlation between entropy and generalizability in a neural network", "authors": [ "Ge Zhang" ], "categories": [ "cond-mat.stat-mech", "cs.LG" ], "abstract": "Although neural networks can solve very complex machine-learning problems, the theoretical reason for their generalizability is still not fully understood. Here we use Wang-Landau Mote Carlo algorithm to calculate the entropy (logarithm of the volume of a part of the parameter space) at a given test accuracy, and a given training loss function value or training accuracy. Our results show that entropical forces help generalizability. Although our study is on a very simple application of neural networks (a spiral dataset and a small, fully-connected neural network), our approach should be useful in explaining the generalizability of more complicated neural networks in future works.", "revisions": [ { "version": "v1", "updated": "2022-07-05T12:28:13.000Z" } ], "analyses": { "keywords": [ "correlation", "wang-landau mote carlo algorithm", "training loss function value", "entropical forces help generalizability", "test accuracy" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }