arXiv Analytics


arXiv:2008.06808 [cs.LG]

Finding Fast Transformers: One-Shot Neural Architecture Search by Component Composition

Henry Tsai, Jayden Ooi, Chun-Sung Ferng, Hyung Won Chung, Jason Riesa

Published 2020-08-15 (Version 1)

Transformer-based models have achieved state-of-the-art results on many natural language processing tasks. However, such models are usually slow at inference time, making deployment difficult. In this paper, we develop an efficient algorithm to search for fast models while maintaining model quality. We describe a novel approach that decomposes the Transformer architecture into smaller components, and propose a sampling-based one-shot architecture search method to find an optimal model for inference. The model search process is more efficient than alternatives, adding only a small overhead to training time. By applying our methods to BERT-base architectures, we achieve a 10% to 30% speedup for pre-trained BERT and a 70% speedup on top of a previous state-of-the-art distilled BERT model on Cloud TPU-v2, with a generally acceptable drop in performance.
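To make the one-shot idea concrete, below is a minimal sketch of sampling-based, weight-sharing architecture search, in the spirit of the abstract but not the paper's actual implementation. Every name here (`MixedBlock`, `SuperNet`, the FFN candidate set, the synthetic data) is an illustrative assumption: the paper decomposes full Transformer layers, including attention, into finer components and selects for inference speed.

```python
import random
import torch
import torch.nn as nn

class MixedBlock(nn.Module):
    """One searchable layer: candidate components of the same width,
    selected by an index at call time (shared supernet weights)."""
    def __init__(self, dim):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim)),   # wide FFN: slow, expressive
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                          nn.Linear(dim, dim)),       # narrow FFN: faster
            nn.Linear(dim, dim),                      # single linear: cheapest
        ])

    def forward(self, x, choice):
        return x + self.candidates[choice](x)  # residual around the chosen component

class SuperNet(nn.Module):
    """Stack of mixed blocks; an 'architecture' is one choice per block."""
    def __init__(self, dim, n_layers, n_classes):
        super().__init__()
        self.blocks = nn.ModuleList([MixedBlock(dim) for _ in range(n_layers)])
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x, arch):
        for block, choice in zip(self.blocks, arch):
            x = block(x, choice)
        return self.head(x)

def sample_arch(n_layers, n_choices=3):
    """Uniformly sample one component per layer."""
    return [random.randrange(n_choices) for _ in range(n_layers)]

# One-shot training: each step trains a randomly sampled subnetwork,
# so all candidates share and update the same supernet weights.
dim, n_layers, n_classes = 32, 4, 2
net = SuperNet(dim, n_layers, n_classes)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(16, dim)                      # synthetic placeholder data
    y = torch.randint(0, n_classes, (16,))
    loss = nn.functional.cross_entropy(net(x, sample_arch(n_layers)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Search: score sampled architectures with the shared weights, no retraining.
def score(arch, n_batches=5):
    net.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for _ in range(n_batches):
            x = torch.randn(64, dim)
            y = torch.randint(0, n_classes, (64,))
            correct += (net(x, arch).argmax(-1) == y).sum().item()
            total += len(y)
    return correct / total

best = max((sample_arch(n_layers) for _ in range(50)), key=score)
print("selected component per layer:", best)
```

In practice the selection objective would combine validation quality with measured inference latency on the target hardware (Cloud TPU-v2 in the paper), rather than accuracy on synthetic batches as in this sketch.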

Related articles: Most relevant | Search more
arXiv:1906.02869 [cs.LG] (Published 2019-06-07)
One-Shot Neural Architecture Search via Compressive Sensing
arXiv:1906.09557 [cs.LG] (Published 2019-06-23)
One-Shot Neural Architecture Search Through A Posteriori Distribution Guided Sampling
arXiv:2006.09264 [cs.LG] (Published 2020-06-12)
Bonsai-Net: One-Shot Neural Architecture Search via Differentiable Pruners