arXiv Analytics

arXiv:2103.08325 [cs.LG]

HSCoNAS: Hardware-Software Co-Design of Efficient DNNs via Neural Architecture Search

Xiangzhong Luo, Di Liu, Shuo Huai, Weichen Liu

Published 2021-03-11 (Version 1)

In this paper, we present HSCoNAS, a novel multi-objective hardware-aware neural architecture search (NAS) framework that automates the design of deep neural networks (DNNs) with high accuracy and low latency on target hardware. To accomplish this goal, we first propose an effective hardware performance modeling method that approximates the runtime latency of DNNs on the target hardware; this model is integrated into HSCoNAS to avoid tedious on-device measurements. In addition, we propose two novel techniques: dynamic channel scaling, which maximizes accuracy under a specified latency constraint, and progressive space shrinking, which refines the search space toward the target hardware and reduces search overhead. These two techniques work jointly to let HSCoNAS perform fine-grained and efficient exploration. Finally, an evolutionary algorithm (EA) is incorporated to conduct the architecture search. Extensive experiments on ImageNet across diverse target hardware, i.e., GPU, CPU, and an edge device, demonstrate the superiority of HSCoNAS over recent state-of-the-art approaches.
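To make the search loop concrete, below is a minimal, self-contained Python sketch of latency-constrained evolutionary NAS. It is an illustration under assumptions, not the paper's implementation: the LATENCY_LUT values, the accuracy_proxy scoring function, the latency budget, and the operator/width choices are all hypothetical placeholders. HSCoNAS builds its latency model from measurements on the target hardware and couples the EA with dynamic channel scaling and progressive space shrinking; here, channel-width mutation loosely stands in for channel scaling.

```python
import random

# Hypothetical per-layer latency lookup table (ms), keyed by
# (operator, channel width). HSCoNAS derives its latency model from
# target-hardware measurements; these numbers are placeholders.
LATENCY_LUT = {
    ("conv3x3", 16): 0.8, ("conv3x3", 32): 1.5,
    ("conv5x5", 16): 1.2, ("conv5x5", 32): 2.3,
    ("skip", 16): 0.1, ("skip", 32): 0.1,
}
OPS = ["conv3x3", "conv5x5", "skip"]
WIDTHS = [16, 32]
NUM_LAYERS = 6
LATENCY_BUDGET = 8.0  # ms; assumed target constraint

def predict_latency(arch):
    """Approximate end-to-end latency as a sum of per-layer LUT entries."""
    return sum(LATENCY_LUT[layer] for layer in arch)

def accuracy_proxy(arch):
    """Hypothetical stand-in for measured validation accuracy."""
    score = sum(2.0 if op != "skip" else 0.5 for op, _ in arch)
    return score + sum(w / 32.0 for _, w in arch)  # wider -> more accurate

def random_arch():
    return [(random.choice(OPS), random.choice(WIDTHS))
            for _ in range(NUM_LAYERS)]

def mutate(arch):
    """Perturb one layer's operator or channel width."""
    child, i = list(arch), random.randrange(NUM_LAYERS)
    op, width = child[i]
    if random.random() < 0.5:
        child[i] = (random.choice(OPS), width)
    else:
        child[i] = (op, random.choice(WIDTHS))
    return child

def evolutionary_search(generations=50, pop_size=20):
    # Seed with random architectures that already satisfy the budget.
    pop = [a for a in (random_arch() for _ in range(10 * pop_size))
           if predict_latency(a) <= LATENCY_BUDGET][:pop_size]
    for _ in range(generations):
        pop.sort(key=accuracy_proxy, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            child = mutate(random.choice(parents))
            # Reject candidates that violate the latency constraint.
            if predict_latency(child) <= LATENCY_BUDGET:
                children.append(child)
        pop = parents + children
    return max(pop, key=accuracy_proxy)

if __name__ == "__main__":
    best = evolutionary_search()
    print(best, predict_latency(best))
```

Rejecting mutated candidates that exceed the budget keeps the whole population feasible, so the EA spends its evaluations only on architectures that could actually be deployed, which is the motivation for integrating a latency model into the search rather than checking latency after the fact.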

Related articles:
arXiv:2012.08906 [cs.LG] (Published 2020-12-16)
Multi-Task Learning in Diffractive Deep Neural Networks via Hardware-Software Co-design
arXiv:2206.00843 [cs.LG] (Published 2022-06-02)
DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
Yonggan Fu et al.
arXiv:1905.07346 [cs.LG] (Published 2019-05-17)
EmBench: Quantifying Performance Variations of Deep Neural Networks across Modern Commodity Devices