
Efficient multi-task learning with adaptive temporal structure for progression prediction


Bibliographic Details
Main Authors: Zhou, Menghui, Zhang, Yu, Liu, Tong, Yang, Yun, Yang, Po
Format: Online Article Text
Language: English
Published: Springer London 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10171734/
https://www.ncbi.nlm.nih.gov/pubmed/37362567
http://dx.doi.org/10.1007/s00521-023-08461-9
_version_ 1785039485758603264
author Zhou, Menghui
Zhang, Yu
Liu, Tong
Yang, Yun
Yang, Po
author_facet Zhou, Menghui
Zhang, Yu
Liu, Tong
Yang, Yun
Yang, Po
author_sort Zhou, Menghui
collection PubMed
description In this paper, we propose a novel, efficient multi-task learning formulation for the class of progression problems, in which the state of a system changes continuously over time. To exploit the knowledge shared among multiple tasks to improve performance, existing multi-task learning methods mainly focus on feature selection or on optimizing the task relation structure. The feature selection methods usually fail to capture the complex relationships between tasks and thus have limited performance. The methods centring on optimizing the relation structure of tasks cannot select meaningful features and have a bi-convex objective function, which results in high computational complexity for the associated optimization algorithm. Unlike these methods, and motivated by the simple and direct idea that the state of a system at the current time point should be related to all previous time points, we first propose a novel relation structure, termed the adaptive global temporal relation structure (AGTS). We then integrate the widely used sparse group Lasso and fused Lasso with AGTS to obtain a novel convex multi-task learning formulation that not only performs feature selection but also adaptively captures global temporal task relatedness. Because of the three non-smooth penalties, the objective function is challenging to optimize. We first design an optimization algorithm based on the alternating direction method of multipliers (ADMM). Since the worst-case convergence rate of ADMM is only sub-linear, we then devise an efficient algorithm based on the accelerated gradient method, which has the optimal convergence rate among first-order methods. We show that the proximal operators of the non-smooth penalties can be computed efficiently owing to the special structure of our formulation.
Experimental results on four real-world datasets demonstrate that our approach not only outperforms multiple baseline MTL methods in effectiveness but also achieves high efficiency.
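The accelerated gradient method mentioned in the abstract is, in its standard composite form, a proximal gradient scheme (FISTA-style) whose per-iteration work is dominated by one gradient step and one proximal operator evaluation. As an illustrative sketch only — the paper's actual AGTS formulation and its three combined penalties are not reproduced here — the snippet below shows the closed-form proximal operator of the ℓ1 penalty (soft-thresholding) inside an accelerated proximal gradient loop for a plain Lasso problem; the function names and the toy objective are hypothetical, not the authors' code.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1: element-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(X, y, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for
    min_w 0.5 * ||X w - y||^2 + lam * ||w||_1,
    with the optimal O(1/k^2) first-order convergence rate."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth gradient
    w = np.zeros(X.shape[1])
    z, t = w.copy(), 1.0                   # extrapolation point and momentum scalar
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y)           # gradient of the smooth part at z
        w_next = soft_threshold(z - grad / L, lam / L)   # proximal step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_next + ((t - 1.0) / t_next) * (w_next - w) # Nesterov extrapolation
        w, t = w_next, t_next
    return w
```

In the paper's setting the single ℓ1 proximal operator would be replaced by the proximal operator of the combined sparse group Lasso, fused Lasso, and AGTS penalties, which the authors show decomposes efficiently due to the structure of their formulation.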
format Online
Article
Text
id pubmed-10171734
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer London
record_format MEDLINE/PubMed
spelling pubmed-10171734 2023-05-11 Efficient multi-task learning with adaptive temporal structure for progression prediction Zhou, Menghui; Zhang, Yu; Liu, Tong; Yang, Yun; Yang, Po. Neural Comput Appl, Original Article. Springer London 2023-05-10 /pmc/articles/PMC10171734/ /pubmed/37362567 http://dx.doi.org/10.1007/s00521-023-08461-9 Text en © The Author(s) 2023
https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Original Article
Zhou, Menghui
Zhang, Yu
Liu, Tong
Yang, Yun
Yang, Po
Efficient multi-task learning with adaptive temporal structure for progression prediction
title Efficient multi-task learning with adaptive temporal structure for progression prediction
title_full Efficient multi-task learning with adaptive temporal structure for progression prediction
title_fullStr Efficient multi-task learning with adaptive temporal structure for progression prediction
title_full_unstemmed Efficient multi-task learning with adaptive temporal structure for progression prediction
title_short Efficient multi-task learning with adaptive temporal structure for progression prediction
title_sort efficient multi-task learning with adaptive temporal structure for progression prediction
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10171734/
https://www.ncbi.nlm.nih.gov/pubmed/37362567
http://dx.doi.org/10.1007/s00521-023-08461-9
work_keys_str_mv AT zhoumenghui efficientmultitasklearningwithadaptivetemporalstructureforprogressionprediction
AT zhangyu efficientmultitasklearningwithadaptivetemporalstructureforprogressionprediction
AT liutong efficientmultitasklearningwithadaptivetemporalstructureforprogressionprediction
AT yangyun efficientmultitasklearningwithadaptivetemporalstructureforprogressionprediction
AT yangpo efficientmultitasklearningwithadaptivetemporalstructureforprogressionprediction