
arXiv:2307.09988 [cs.LG]

TinyTrain: Deep Neural Network Training at the Extreme Edge

Young D. Kwon, Rui Li, Stylianos I. Venieris, Jagmohan Chauhan, Nicholas D. Lane, Cecilia Mascolo

Published 2023-07-19 (Version 1)

On-device training is essential for user personalisation and privacy. With the pervasiveness of IoT devices and microcontroller units (MCUs), this task becomes more challenging due to the constrained memory and compute resources, and the limited availability of labelled user data. Nonetheless, prior works neglect the data scarcity issue, require excessively long training times (e.g. a few hours), or induce substantial accuracy loss ($\geq$10\%). We propose TinyTrain, an on-device training approach that drastically reduces training time by selectively updating parts of the model and explicitly coping with data scarcity. TinyTrain introduces a task-adaptive sparse-update method that dynamically selects which layers/channels to update based on a multi-objective criterion that jointly captures the user data and the memory and compute capabilities of the target device, leading to high accuracy on unseen tasks with a reduced computation and memory footprint. TinyTrain outperforms vanilla fine-tuning of the entire network by 3.6-5.0\% in accuracy, while reducing the backward-pass memory and computation cost by up to 2,286$\times$ and 7.68$\times$, respectively. Targeting broadly used real-world edge devices, TinyTrain achieves 9.5$\times$ faster and 3.5$\times$ more energy-efficient training than status-quo approaches, and a 2.8$\times$ smaller memory footprint than SOTA methods, while remaining within the 1 MB memory envelope of MCU-grade platforms.
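The abstract describes the selection step only at a high level. As an illustration, the sketch below shows one plausible form that a budget-aware channel selection could take in PyTorch: rank channels by an importance-per-cost ratio and greedily add them until a memory or compute budget is exhausted. The greedy heuristic, the `select_channels` helper, and all cost/budget values are illustrative assumptions, not the paper's actual multi-objective criterion.

```python
# Hypothetical sketch of budgeted sparse-update channel selection.
# Assumes per-channel importance scores (e.g. gradient-based) and
# per-channel memory/compute costs are already available.
import torch

def select_channels(importance: torch.Tensor,
                    mem_cost: torch.Tensor,
                    flop_cost: torch.Tensor,
                    mem_budget: float,
                    flop_budget: float):
    """Greedily pick channels with the best importance-per-cost ratio
    while both the memory and the compute budgets allow."""
    score = importance / (mem_cost + flop_cost)   # simple multi-objective ratio
    order = torch.argsort(score, descending=True)
    chosen, mem_used, flops_used = [], 0.0, 0.0
    for idx in order.tolist():
        if (mem_used + mem_cost[idx] <= mem_budget
                and flops_used + flop_cost[idx] <= flop_budget):
            chosen.append(idx)
            mem_used += float(mem_cost[idx])
            flops_used += float(flop_cost[idx])
    return chosen

# Toy usage: 8 channels with random importance scores and uniform costs.
torch.manual_seed(0)
importance = torch.rand(8)           # stand-in for gradient-based scores
mem_cost = torch.full((8,), 0.1)     # memory to store this channel's gradients
flop_cost = torch.full((8,), 1.0)    # backward-pass FLOPs for this channel
print(select_channels(importance, mem_cost, flop_cost,
                      mem_budget=0.35, flop_budget=4.0))  # -> 3 channels fit
```

Under these toy budgets only three channels fit, which mirrors the paper's framing: the tighter of the memory and compute constraints of the target device caps how much of the model gets updated on-device.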

Related articles:
arXiv:2102.00527 [cs.LG] (Published 2021-01-31)
Computational Performance Predictions for Deep Neural Network Training: A Runtime-Based Approach
arXiv:1603.07341 [cs.LG] (Published 2016-03-23)
Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices
arXiv:1707.04822 [cs.LG] (Published 2017-07-16)
Normalized Gradient with Adaptive Stepsize Method for Deep Neural Network Training