arXiv Analytics

arXiv:2301.01572 [cs.LG]

Multi-Task Learning with Prior Information

Mengyuan Zhang, Kai Liu

Published 2023-01-04Version 1

Multi-task learning aims to boost the generalization performance of multiple related tasks simultaneously by leveraging the information contained in those tasks. In this paper, we propose a multi-task learning framework that utilizes prior knowledge about the relations between features. We also impose a penalty on changes in the coefficients of each specific feature, so that related tasks have similar coefficients on the common features they share. In addition, we capture a common set of features via group sparsity. The objective is formulated as a non-smooth convex optimization problem, which can be solved with various methods, including gradient descent with a fixed step size, the iterative shrinkage-thresholding algorithm (ISTA) with backtracking, and its accelerated variant, the fast iterative shrinkage-thresholding algorithm (FISTA). In light of the sub-linear convergence rate of the aforementioned methods, we propose an asymptotically linearly convergent algorithm with a theoretical guarantee. Empirical experiments on both regression and classification tasks with real-world datasets demonstrate that our proposed algorithms are capable of improving the generalization performance of multiple related tasks.
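The abstract does not state the closed-form objective, so the sketch below assumes a generic multi-task least-squares loss with an l2,1 group-sparsity penalty (one common device for capturing a shared feature set) and solves it with plain ISTA using a fixed step size. The names `ista_multitask` and `prox_l21`, and all parameter values, are illustrative assumptions, not the paper's own implementation.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: soft-threshold each row of W
    (one row per feature, one column per task) by its Euclidean norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale

def ista_multitask(Xs, ys, lam=0.1, n_iter=200):
    """ISTA for sum_t (1/2n_t)||X_t w_t - y_t||^2 + lam * ||W||_{2,1}.
    Xs, ys: per-task design matrices and targets; returns W of shape (d, T)."""
    d = Xs[0].shape[1]
    T = len(Xs)
    W = np.zeros((d, T))
    # A valid Lipschitz constant for the smooth part: the largest per-task
    # spectral norm squared, scaled by the sample count.
    L = max(np.linalg.norm(X, 2) ** 2 / X.shape[0] for X in Xs)
    for _ in range(n_iter):
        # Per-task least-squares gradients, stacked column-wise.
        G = np.column_stack([X.T @ (X @ W[:, t] - y) / X.shape[0]
                             for t, (X, y) in enumerate(zip(Xs, ys))])
        # Gradient step with step size 1/L, then the group-sparse prox.
        W = prox_l21(W - G / L, lam / L)
    return W

# Toy check: two related regression tasks sharing a sparse support.
rng = np.random.default_rng(0)
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 1.5]
Xs = [rng.standard_normal((100, 10)) for _ in range(2)]
ys = [X @ w_true + 0.01 * rng.standard_normal(100) for X in Xs]
W = ista_multitask(Xs, ys, lam=0.05)
```

FISTA would add a momentum sequence on top of the same prox step, improving the rate from O(1/k) to O(1/k^2); both remain sub-linear, which is the gap the paper's proposed algorithm addresses.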

Related articles:
arXiv:1712.08164 [cs.LG] (Published 2017-12-21)
Multi-task learning of time series and its application to the travel demand
arXiv:2306.07737 [cs.LG] (Published 2023-06-13)
Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study
arXiv:1809.10336 [cs.LG] (Published 2018-09-27)
Multi-task Learning for Financial Forecasting