arXiv:1908.06134 [cs.LG]

Online Feature Selection for Activity Recognition using Reinforcement Learning with Multiple Feedback

Taku Yamagata, Raúl Santos-Rodríguez, Ryan McConville, Atis Elsts

Published 2019-08-16 (Version 1)

Recent advances in both machine learning and the Internet of Things have drawn attention to automatic Activity Recognition, where users wear a device with sensors whose outputs are mapped to a predefined set of activities. However, few studies have considered the balance between wearable power consumption and activity recognition accuracy. This is particularly important when part of the computational load runs on the wearable device itself. In this paper, we present a new methodology that performs feature selection on the device using Reinforcement Learning (RL) to find the optimum balance between power consumption and accuracy. To accelerate learning, we extend the RL algorithm to handle multiple sources of feedback, using them to shape the policy while jointly estimating the accuracy of each feedback source. We evaluated our system on the SPHERE challenge dataset, a publicly available research dataset. The results show that our proposed method achieves a good trade-off between wearable power consumption and activity recognition accuracy.
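The core idea can be illustrated with a minimal sketch: treat each candidate feature subset as an arm of a bandit, define the reward as recognition accuracy minus a weighted power cost, and fuse multiple feedback sources by weighting each according to its estimated reliability. This is a simplified, hypothetical illustration of the trade-off the abstract describes, not the authors' actual algorithm; all class and parameter names (`FeatureSelectionBandit`, `power_weight`, etc.) are invented for this example.

```python
import random

class FeatureSelectionBandit:
    """Epsilon-greedy bandit over feature subsets (illustrative sketch).

    Each arm is a feature subset with a known power cost; the reward
    trades off accuracy feedback against that cost, mimicking the
    power/accuracy balance described in the abstract.
    """

    def __init__(self, subsets, power_cost, power_weight=0.5,
                 epsilon=0.1, seed=0):
        self.subsets = subsets          # candidate feature subsets
        self.power_cost = power_cost    # normalized power cost per subset
        self.power_weight = power_weight
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = [0] * len(subsets)
        self.values = [0.0] * len(subsets)

    def select(self):
        """Pick a feature subset: explore with prob. epsilon, else exploit."""
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.subsets))
        return max(range(len(self.subsets)), key=lambda i: self.values[i])

    def update(self, arm, feedbacks, reliabilities):
        """Update the chosen arm from several feedback sources.

        Each source's accuracy feedback is weighted by its estimated
        reliability, then the power cost of the subset is subtracted.
        """
        total = sum(reliabilities)
        fused_accuracy = sum(f * w for f, w in zip(feedbacks, reliabilities)) / total
        reward = fused_accuracy - self.power_weight * self.power_cost[arm]
        self.counts[arm] += 1
        # incremental running-mean update of the arm's value estimate
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

For example, a cheap accelerometer-only subset can end up preferred over a slightly more accurate but power-hungry accelerometer-plus-gyroscope subset once the power penalty is factored in.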
