arXiv Analytics

arXiv:1707.08114 [cs.LG]

A Survey on Multi-Task Learning

Yu Zhang, Qiang Yang

Published 2017-07-25 (Version 1)

Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to improve the generalization performance of all of them. In this paper, we give a survey of MTL. First, we classify MTL algorithms into several categories: the feature learning approach, low-rank approach, task clustering approach, task relation learning approach, dirty approach, multi-level approach, and deep learning approach. To compare these approaches, we discuss the characteristics of each one. To further improve the performance of learning tasks, MTL can be combined with other learning paradigms, including semi-supervised learning, active learning, reinforcement learning, multi-view learning, and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models struggle; we therefore review online, parallel, and distributed MTL models, as well as feature hashing, and highlight their computational and storage advantages. Many real-world applications use MTL to boost their performance, and we introduce some representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.
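The core idea of the paradigm can be illustrated with a minimal sketch of hard parameter sharing, one common MTL formulation related to the feature learning approach the survey covers: two regression tasks share a linear feature-mapping layer while each keeps its own output head, so the joint gradient update couples the tasks. All names, dimensions, and the synthetic data below are illustrative assumptions, not the survey's notation.

```python
import numpy as np

# Hypothetical hard-parameter-sharing sketch: two related regression
# tasks share the feature map W_shared; each task has its own head.
rng = np.random.default_rng(0)
n, d, h = 100, 5, 3                        # samples per task, input dim, shared dim

# Synthetic data: both tasks depend on the same underlying features.
X = {t: rng.normal(size=(n, d)) for t in (0, 1)}
true_shared = rng.normal(size=(d, h))
y = {t: X[t] @ true_shared @ rng.normal(size=h) for t in (0, 1)}

W_shared = rng.normal(size=(d, h)) * 0.1   # shared representation (learned)
heads = {t: rng.normal(size=h) * 0.1 for t in (0, 1)}

lr = 0.01
for step in range(500):
    grad_shared = np.zeros_like(W_shared)
    for t in (0, 1):
        z = X[t] @ W_shared                 # shared features for task t
        err = z @ heads[t] - y[t]           # task-specific residual
        heads[t] -= lr * (z.T @ err) / n    # update the task-specific head
        # Gradient of 0.5*||X W a - y||^2 w.r.t. W is X^T err a^T:
        grad_shared += X[t].T @ np.outer(err, heads[t]) / n
    W_shared -= lr * grad_shared            # joint update couples both tasks

for t in (0, 1):
    mse = np.mean((X[t] @ W_shared @ heads[t] - y[t]) ** 2)
    print(f"task {t} MSE: {mse:.4f}")
```

Because the shared gradient sums contributions from both tasks, information in one task regularizes the representation used by the other, which is the generalization benefit the abstract describes.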

Related articles:
arXiv:2107.01760 [cs.LG] (Published 2021-07-05)
Single Model for Influenza Forecasting of Multiple Countries by Multi-task Learning
arXiv:1203.3536 [cs.LG] (Published 2012-03-15)
A Convex Formulation for Learning Task Relationships in Multi-Task Learning
arXiv:2301.01572 [cs.LG] (Published 2023-01-04)
Multi-Task Learning with Prior Information