arXiv:1809.09399 [cs.LG]

Non-Iterative Knowledge Fusion in Deep Convolutional Neural Networks

Mikhail Iu. Leontev, Viktoriia Islenteva, Sergey V. Sukhov

Published 2018-09-25 (Version 1)

Incorporating new knowledge into a neural network while preserving previously learned knowledge is known to be a nontrivial problem. The problem becomes even more complex when the new knowledge is contained not in new training examples but in the parameters (connection weights) of another neural network. Here we propose and test two methods for combining the knowledge contained in separate networks. One method is based on a simple summation of the weights of the constituent neural networks. The other method incorporates new knowledge by modifying only those weights that are nonessential for preserving the already stored information. We show that with these methods the knowledge from one network can be transferred into another non-iteratively, without any training sessions. The fused network operates efficiently, performing classification far better than chance level. The efficiency of the methods is quantified on several publicly available data sets for classification tasks with both shallow and deep neural networks.
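As a rough illustration of the first method (weight summation), the sketch below combines the parameters of two identically structured PyTorch networks. The abstract does not specify the exact combination rule (plain sum, average, or scaled sum), any neuron alignment, or the architectures used, so the `alpha` weighting and the toy models here are assumptions for illustration only.

```python
# Minimal sketch of non-iterative fusion by weight summation, assuming two
# networks with identical architectures. The convex weighting `alpha` is an
# illustrative choice, not a detail taken from the paper.
import copy
import torch.nn as nn


def fuse_by_weight_sum(net_a: nn.Module, net_b: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Return a new network whose parameters are a weighted sum of the inputs'."""
    fused = copy.deepcopy(net_a)
    state_a, state_b = net_a.state_dict(), net_b.state_dict()
    fused_state = {k: alpha * state_a[k] + (1.0 - alpha) * state_b[k] for k in state_a}
    fused.load_state_dict(fused_state)
    return fused


# Hypothetical usage: two classifiers trained separately, e.g. on different
# subsets of classes, fused without any further training iterations.
net_a = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
net_b = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
fused = fuse_by_weight_sum(net_a, net_b)
```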

Related articles:
arXiv:2007.14285 [cs.LG] (Published 2020-07-28)
Theory of Deep Convolutional Neural Networks II: Spherical Analysis
arXiv:2106.12498 [cs.LG] (Published 2021-06-23)
Universal Consistency of Deep Convolutional Neural Networks
arXiv:1902.05967 [cs.LG] (Published 2019-02-15)
Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization