arXiv:2006.09791 [cs.LG]

Optimizing Grouped Convolutions on Edge Devices

Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Published 2020-06-17 (Version 1)

When deploying a deep neural network on constrained hardware, it is possible to replace the network's standard convolutions with grouped convolutions. This allows for substantial memory savings with minimal loss of accuracy. However, current implementations of grouped convolutions in modern deep learning frameworks are far from performing optimally in terms of speed. In this paper we propose Grouped Spatial Pack Convolutions (GSPC), a new implementation of grouped convolutions that outperforms existing solutions. We implement GSPC in TVM, which provides state-of-the-art performance on edge devices. We analyze a set of networks utilizing different types of grouped convolutions and evaluate their performance in terms of inference time on several edge devices. We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average respectively. Code is available at https://github.com/gecLAB/tvm-GSPC/
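The memory savings the abstract mentions come from splitting the input channels into groups, so each filter only sees C_in / g channels. A minimal sketch (illustrative only, not from the paper) of how the weight-parameter count shrinks with the number of groups:

```python
# Parameter count of a 2D convolution's weight tensor:
# standard conv: C_out * C_in * K * K
# grouped conv:  C_out * (C_in / g) * K * K  -> g-fold reduction
def conv_params(c_in, c_out, k, groups=1):
    assert c_in % groups == 0 and c_out % groups == 0
    return c_out * (c_in // groups) * k * k

standard = conv_params(256, 256, 3)            # 589,824 parameters
grouped8 = conv_params(256, 256, 3, groups=8)  # 73,728 parameters (8x fewer)
print(standard, grouped8)
```

The same layer shape with 8 groups needs one eighth of the weights, which is why grouped convolutions suit constrained edge devices; the paper's contribution (GSPC) is about making this structure run fast, not about the parameter count itself.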

Comments: Camera-ready version, to be published at ASAP 2020 (The 31st IEEE International Conference on Application-specific Systems, Architectures and Processors). 8 pages, 6 figures
Categories: cs.LG, cs.CV, cs.DC, stat.ML
Subjects: I.2.6, D.3.4, C.1.4
Related articles:
arXiv:2309.03569 [cs.LG] (Published 2023-09-07): Sparse Federated Training of Object Detection in the Internet of Vehicles
arXiv:2201.10947 [cs.LG] (Published 2022-01-22): Enabling Deep Learning on Edge Devices through Filter Pruning and Knowledge Transfer
arXiv:2210.03204 [cs.LG] (Published 2022-10-06): Enabling Deep Learning on Edge Devices