
Task’s Choice: Pruning-Based Feature Sharing (PBFS) for Multi-Task Learning


Bibliographic Details

Main Authors: Chen, Ying; Yu, Jiong; Zhao, Yutong; Chen, Jiaying; Du, Xusheng
Format: Online Article (Text)
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8947268/
https://www.ncbi.nlm.nih.gov/pubmed/35327942
http://dx.doi.org/10.3390/e24030432
Collection: PubMed
Description: In most existing multi-task learning (MTL) models, the information shared across tasks is learned by sharing parameters across hidden layers, as in hard sharing, soft sharing, and hierarchical sharing. One promising approach introduces model pruning into this shared-information learning, as in sparse sharing, which is regarded as particularly effective for knowledge transfer. However, such methods perform poorly on conflicting tasks: they learn each task's private information inadequately or suffer from negative transfer. In this paper, we propose a multi-task learning model, Pruning-Based Feature Sharing (PBFS), that merges a soft parameter sharing structure with model pruning and adds a prunable shared network among the task-specific subnets. In this way, each task can select parameters from the shared subnet according to its requirements. Experiments are conducted on three public benchmark datasets and one synthetic dataset, and the impact of subnet sparsity and task correlation on model performance is analyzed. Results show that the proposed model's information-sharing strategy aids transfer learning and outperforms several comparison models.
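The core mechanism the abstract describes, each task pruning its own view of a shared network and combining it with a task-private path, can be illustrated with a minimal NumPy sketch. All sizes, names, and the magnitude-based pruning criterion below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared layer, jointly available to all tasks (hypothetical 8x8 size).
W_shared = rng.normal(size=(8, 8))

def magnitude_mask(w, sparsity):
    """Binary mask keeping the largest-magnitude entries of w.
    `sparsity` is the fraction of weights pruned (zeroed out)."""
    k = int(w.size * sparsity)                    # number of weights to prune
    if k == 0:
        return np.ones_like(w)
    thresh = np.sort(np.abs(w), axis=None)[k - 1]  # k-th smallest magnitude
    return (np.abs(w) > thresh).astype(w.dtype)

# "Task's choice": each task prunes the shared layer independently, so tasks
# with different requirements retain different subsets of shared parameters.
masks = {
    "task_a": magnitude_mask(W_shared, sparsity=0.5),
    "task_b": magnitude_mask(W_shared, sparsity=0.8),
}

# Task-private layers, as in the soft-parameter-sharing side of the model.
W_private = {t: rng.normal(size=(8, 8)) for t in masks}

def forward(task, x):
    shared_out = x @ (W_shared * masks[task])   # task-selected shared features
    private_out = x @ W_private[task]           # task-private features
    return shared_out + private_out             # simple merge of both paths

x = rng.normal(size=(1, 8))
y_a = forward("task_a", x)
y_b = forward("task_b", x)
```

In a trained model the masks would be learned or iteratively pruned rather than fixed up front; the point here is only that the shared weights are reused across tasks while each task's mask determines which of them it actually consumes.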
ID: pubmed-8947268
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Entropy (Basel)
Published online: 2022-03-21
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).