arXiv:2107.01760 [cs.LG]

Single Model for Influenza Forecasting of Multiple Countries by Multi-task Learning

Taichi Murayama, Shoko Wakamiya, Eiji Aramaki

Published 2021-07-05 (Version 1)

Accurate forecasting of infectious epidemic diseases such as influenza is a crucial task for medical institutions. Although numerous flu forecasting methods and models, based mainly on historical flu activity data and online user-generated content, have been proposed in previous studies, no existing flu forecasting model targets multiple countries using both types of data. Our paper leverages multi-task learning to tackle the challenge of building one flu forecasting model for multiple countries, treating each country as a separate task. To further improve prediction performance, we address two issues: finding suitable search queries, which form part of the user-generated content, and leveraging those search queries efficiently when building the model. For the first issue, we propose approaches that transfer queries from English to other languages. For the second issue, we propose a novel flu forecasting model that exploits search queries through an attention mechanism, and we extend it to a multi-task model that forecasts flu for multiple countries. Experiments on forecasting flu epidemics in five countries demonstrate that our model significantly outperforms the baselines by leveraging search queries and multi-task learning.
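The architecture the abstract describes could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a shared encoder over historical flu activity plus attention-weighted search-query features, with one output head per country (task). All class names, shapes, and country codes are hypothetical.

```python
# Hedged sketch of a multi-task flu forecaster: each country is one task,
# sharing an encoder while keeping a country-specific output head.
# An attention-style weighting selects among search-query features.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

class MultiTaskFluForecaster:
    def __init__(self, n_queries, hist_dim, hidden, countries):
        # Shared parameters across all countries (tasks).
        self.W_shared = rng.normal(0.0, 0.1, (hist_dim + n_queries, hidden))
        self.attn_scores = rng.normal(0.0, 0.1, n_queries)
        # One lightweight head per country.
        self.heads = {c: rng.normal(0.0, 0.1, hidden) for c in countries}

    def forward(self, country, hist, queries):
        # Attention over search-query features: weight queries by learned scores.
        attn = softmax(queries * self.attn_scores)
        weighted_queries = attn * queries
        # Shared representation of history + weighted queries.
        h = np.tanh(np.concatenate([hist, weighted_queries]) @ self.W_shared)
        # Country-specific prediction head.
        return float(h @ self.heads[country])

model = MultiTaskFluForecaster(n_queries=5, hist_dim=4, hidden=8,
                               countries=["US", "JP", "UK", "FR", "AU"])
pred = model.forward("JP", hist=rng.random(4), queries=rng.random(5))
```

In a trained version of such a model, the shared encoder would be fit jointly on all countries' data, which is the usual motivation for multi-task learning when some countries have sparse surveillance data.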

Comments: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2021
Categories: cs.LG, cs.AI