{ "id": "1805.09214", "version": "v1", "published": "2018-05-23T15:13:56.000Z", "updated": "2018-05-23T15:13:56.000Z", "title": "A Unified Framework for Training Neural Networks", "authors": [ "Hadi Ghauch", "Hossein Shokri-Ghadikolaei", "Carlo Fischione", "Mikael Skoglund" ], "comment": "15 pages, submitted to NIPS 2018", "categories": [ "cs.LG", "stat.ML" ], "abstract": "The lack of mathematical tractability of Deep Neural Networks (DNNs) has hindered progress towards having a unified convergence analysis of training algorithms, in the general setting. We propose a unified optimization framework for training different types of DNNs, and establish its convergence for arbitrary loss, activation, and regularization functions, assumed to be smooth. We show that framework generalizes well-known first- and second-order training methods, and thus allows us to show the convergence of these methods for various DNN architectures and learning tasks, as a special case of our approach. We discuss some of its applications in training various DNN architectures (e.g., feed-forward, convolutional, linear networks), to regression and classification tasks.", "revisions": [ { "version": "v1", "updated": "2018-05-23T15:13:56.000Z" } ], "analyses": { "keywords": [ "training neural networks", "unified framework", "dnn architectures", "deep neural networks", "framework generalizes well-known" ], "note": { "typesetting": "TeX", "pages": 15, "language": "en", "license": "arXiv", "status": "editable" } } }