Toward a stable and low-resource PLM-based medical diagnostic system via prompt tuning and MoE structure

Bibliographic Details
Main Authors: Dong, Bowen; Wang, Zhuo; Li, Zhenyu; Duan, Zhichao; Xu, Jiacheng; Pan, Tengyu; Zhang, Rui; Liu, Ning; Li, Xiuxing; Wang, Jie; Liu, Caiyan; Dong, Liling; Mao, Chenhui; Gao, Jing; Wang, Jianyong
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10400680/
https://www.ncbi.nlm.nih.gov/pubmed/37537202
http://dx.doi.org/10.1038/s41598-023-39543-2
Description
Summary: Machine learning (ML) has been widely applied in assistive disease diagnosis and prediction systems to reduce the heavy dependence on medical resources and improve healthcare quality. Moreover, with the rise of pre-trained language models (PLMs), the application prospects and potential of machine learning methods in this field have expanded further. PLMs have recently achieved tremendous success in diverse text-processing tasks; however, limited by the significant semantic gap between the pre-training corpus and structured electronic health records (EHRs), PLMs cannot converge to the anticipated disease diagnosis and prediction results. Unfortunately, establishing connections between PLMs and EHRs typically requires the extraction of curated predictor variables from structured EHR resources, which is tedious and labor-intensive, and even discards vast amounts of implicit information. In this work, we propose an Input Prompting and Discriminative language model with the Mixture-of-experts framework (IPDM), which strengthens the model's capability to learn knowledge from heterogeneous information and facilitates its feature-aware ability. Furthermore, leveraging the prompt-tuning mechanism, IPDM can inherit the benefits of pre-training in downstream tasks through only minor modifications. IPDM remarkably outperforms existing models, as demonstrated by experiments on one disease diagnosis task and two disease prediction tasks. Finally, experiments in few-feature and few-sample settings show that IPDM achieves significant stability and impressive performance in predicting chronic diseases with unclear early-onset characteristics or sudden diseases with insufficient data. This verifies the superiority of IPDM over existing mainstream methods and indicates that IPDM can address the aforementioned challenges by establishing a stable and low-resource medical diagnostic system for various clinical scenarios.
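The abstract names two mechanisms: prompt tuning, which adapts a frozen PLM through a small number of trainable parameters, and a mixture-of-experts (MoE) head that combines several specialist predictors. The sketch below is a minimal, hypothetical illustration of how these two ideas fit together in PyTorch; the class name, layer sizes, toy frozen backbone, and pooling choice are all assumptions for illustration, not the authors' published IPDM implementation.

```python
# Hypothetical sketch: soft-prompt tuning on a frozen encoder plus an MoE head.
# Everything here (names, dimensions, the toy backbone) is illustrative only.
import torch
import torch.nn as nn


class PromptTunedMoEClassifier(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, n_prompt=20,
                 n_experts=4, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Freeze the "pre-trained" backbone: only prompts and the MoE head train.
        for p in list(self.embed.parameters()) + list(self.encoder.parameters()):
            p.requires_grad = False
        # Trainable soft-prompt vectors prepended to every input (prompt tuning).
        self.prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
        # MoE head: a gate weights the predictions of several small experts.
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                          nn.Linear(d_model, n_classes))
            for _ in range(n_experts))

    def forward(self, token_ids):
        x = self.embed(token_ids)                                   # (B, T, d)
        prompt = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([prompt, x], dim=1))             # prepend prompts
        pooled = h.mean(dim=1)                                      # simple mean pooling
        weights = torch.softmax(self.gate(pooled), dim=-1)          # (B, E)
        expert_out = torch.stack([e(pooled) for e in self.experts], dim=1)  # (B, E, C)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)      # (B, C)


if __name__ == "__main__":
    model = PromptTunedMoEClassifier()
    logits = model(torch.randint(0, 30522, (2, 16)))  # dummy batch of token ids
    print(logits.shape)  # torch.Size([2, 2])
```

Because only the prompt vectors and the MoE head carry gradients, this kind of setup updates a small fraction of the total parameters, which is consistent with the abstract's claim of adapting a PLM to downstream clinical tasks through minor modifications.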