arXiv Analytics

arXiv:1806.06850 [cs.LG]

Polynomial Regression As an Alternative to Neural Nets

Xi Cheng, Bohdan Khomtchouk, Norman Matloff, Pete Mohanty

Published 2018-06-13 (Version 1)

Despite the success of neural networks (NNs), there is still a concern among many over their "black box" nature. Why do they work? Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models. This view has various implications for NNs: for example, it explains why convergence problems arise and gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available.
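To make the abstract's central suggestion concrete, here is a minimal sketch of substituting polynomial regression for a neural net on a regression task. This is an illustration only, not the authors' polyreg package (which is an R library); it uses scikit-learn's PolynomialFeatures and LinearRegression on synthetic data invented for this example. Note that, unlike an NN, the fit involves no learning rate, architecture choice, or iterative convergence.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic data (hypothetical, for illustration): two features with
# linear, interaction, and quadratic effects plus small Gaussian noise.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (1.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1]
     + 0.5 * X[:, 0] * X[:, 1] + X[:, 1] ** 2
     + rng.normal(0.0, 0.05, size=200))

# Degree-2 polynomial regression: expand the features into all monomials
# up to degree 2, then fit ordinary least squares in closed form.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)

r2 = model.score(X, y)  # in-sample R^2 of the polynomial fit
```

Because the generating function here is itself a degree-2 polynomial, the fit recovers it almost exactly; the only tuning parameter is the polynomial degree, which stands in for the many hyperparameters of an NN.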

Related articles:
arXiv:2006.14606 [cs.LG] (Published 2020-06-25)
Global Convergence and Induced Kernels of Gradient-Based Meta-Learning with Neural Nets
arXiv:2401.01869 [cs.LG] (Published 2024-01-03)
On the hardness of learning under symmetries
arXiv:2108.12124 [cs.LG] (Published 2021-08-27)
Canoe : A System for Collaborative Learning for Neural Nets