{
  "id": "2301.01572",
  "version": "v1",
  "published": "2023-01-04T12:48:05.000Z",
  "updated": "2023-01-04T12:48:05.000Z",
  "title": "Multi-Task Learning with Prior Information",
  "authors": [
    "Mengyuan Zhang",
    "Kai Liu"
  ],
  "comment": "Accepted to SDM'23",
  "categories": [
    "cs.LG",
    "cs.AI"
  ],
  "abstract": "Multi-task learning aims to boost the generalization performance of multiple related tasks simultaneously by leveraging the information contained in those tasks. In this paper, we propose a multi-task learning framework that utilizes prior knowledge about the relations between features. We also penalize changes in the coefficients of each specific feature across tasks, so that related tasks have similar coefficients on the common features they share. In addition, we capture a common set of features via group sparsity. The objective is formulated as a non-smooth convex optimization problem, which can be solved with various methods, including gradient descent with a fixed step size, the iterative shrinkage-thresholding algorithm (ISTA) with backtracking, and its variant, the fast iterative shrinkage-thresholding algorithm (FISTA). In light of the sub-linear convergence rate of the aforementioned methods, we propose an asymptotically linearly convergent algorithm with a theoretical guarantee. Empirical experiments on both regression and classification tasks with real-world datasets demonstrate that our proposed algorithms improve the generalization performance of multiple related tasks.",
  "revisions": [
    {
      "version": "v1",
      "updated": "2023-01-04T12:48:05.000Z"
    }
  ],
  "analyses": {
    "keywords": [
      "multi-task learning",
      "prior information",
      "multiple related tasks",
      "non-smooth convex optimization problem",
      "generalization performance"
    ],
    "note": {
      "typesetting": "TeX",
      "pages": 0,
      "language": "en",
      "license": "arXiv",
      "status": "editable"
    }
  }
}
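The abstract names FISTA on a non-smooth convex multi-task objective with group sparsity; the sketch below makes that concrete. It is not the paper's code: it assumes a multi-task least-squares loss with an L2,1 row-wise group-sparsity penalty as a stand-in for the paper's full objective (which also includes the feature-relation prior), and the function names (`grad`, `prox_l21`, `fista`), the fixed step size 1/L, and the toy data are all illustrative assumptions.

```python
# Minimal FISTA sketch (assumption: a simplified multi-task objective), i.e.
#   min_W  0.5 * sum_t ||X_t @ W[:, t] - y_t||^2  +  lam * sum_j ||W[j, :]||_2,
# where column t of W holds task t's coefficients and the L2,1 penalty
# selects a common set of features across tasks via group sparsity.
import numpy as np

def grad(W, Xs, ys):
    """Gradient of the smooth least-squares part, one column per task."""
    G = np.zeros_like(W)
    for t, (X, y) in enumerate(zip(Xs, ys)):
        G[:, t] = X.T @ (X @ W[:, t] - y)
    return G

def prox_l21(W, thresh):
    """Row-wise group soft-thresholding: the prox of thresh * ||.||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return W * scale

def fista(Xs, ys, lam, n_iter=500):
    d, T = Xs[0].shape[1], len(Xs)
    # Fixed step size 1/L; L is an upper bound on the Lipschitz constant
    # of the gradient (sum of squared spectral norms of the task designs).
    L = sum(np.linalg.norm(X, 2) ** 2 for X in Xs)
    W = np.zeros((d, T))
    Z, tk = W.copy(), 1.0
    for _ in range(n_iter):
        W_next = prox_l21(Z - grad(Z, Xs, ys) / L, lam / L)
        tk_next = (1.0 + np.sqrt(1.0 + 4.0 * tk ** 2)) / 2.0
        Z = W_next + ((tk - 1.0) / tk_next) * (W_next - W)  # momentum step
        W, tk = W_next, tk_next
    return W

# Toy usage: two related regression tasks sharing a sparse feature support.
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((50, 20)), rng.standard_normal((50, 20))
w_true = np.zeros(20); w_true[:5] = 1.0
y1, y2 = X1 @ w_true, X2 @ (w_true * 1.1)
W = fista([X1, X2], [y1, y2], lam=1.0)  # rows 5..19 shrink toward zero
```

Plain ISTA is the same loop without the momentum extrapolation (Z = W_next), which is what gives FISTA its faster O(1/k^2) sub-linear rate over ISTA's O(1/k); the paper's proposed algorithm, per the abstract, improves on both with an asymptotically linear rate.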