JLAN: medical code prediction via joint learning attention networks and denoising mechanism

BACKGROUND: Clinical notes are documents that contain detailed information about the health status of patients. Medical codes generally accompany them. However, manual diagnosis is costly and error-prone. Moreover, large datasets in clinical diagnosis are susceptible to noisy labels because of e...

Bibliographic Details
Main Authors: Li, Xingwang, Zhang, Yijia, Islam, Faiz ul, Dong, Deshi, Wei, Hao, Lu, Mingyu
Format: Online Article Text
Language: English
Published: BioMed Central 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8667397/
https://www.ncbi.nlm.nih.gov/pubmed/34903164
http://dx.doi.org/10.1186/s12859-021-04520-x
_version_ 1784614380557565952
author Li, Xingwang
Zhang, Yijia
Islam, Faiz ul
Dong, Deshi
Wei, Hao
Lu, Mingyu
author_facet Li, Xingwang
Zhang, Yijia
Islam, Faiz ul
Dong, Deshi
Wei, Hao
Lu, Mingyu
author_sort Li, Xingwang
collection PubMed
description BACKGROUND: Clinical notes are documents that contain detailed information about the health status of patients. Medical codes generally accompany them. However, manual diagnosis is costly and error-prone. Moreover, large datasets in clinical diagnosis are susceptible to noisy labels because of erroneous manual annotation. Therefore, machine learning has been utilized to perform automatic diagnoses. Previous state-of-the-art (SOTA) models used convolutional neural networks to build document representations for predicting medical codes. However, the code distribution in clinical notes is usually long-tailed. Moreover, most models fail to deal with noise during code allocation. Therefore, a denoising mechanism and long-tailed classification are the keys to automated coding at scale. RESULTS: In this paper, a new joint learning model is proposed to extend our attention model for predicting medical codes from clinical notes. On the MIMIC-III-50 dataset, our model outperforms all the baselines and SOTA models on all quantitative metrics. On the MIMIC-III-full dataset, our model outperforms the most advanced models in macro-F1, micro-F1, macro-AUC, and precision at eight. In addition, after introducing the denoising mechanism, the model converges faster and its overall loss is reduced. CONCLUSIONS: The innovations of our model are threefold: firstly, code-specific representations can be identified by adopting the self-attention mechanism and the label attention mechanism. Secondly, performance on long-tailed distributions can be boosted by introducing the joint learning mechanism. Thirdly, the denoising mechanism is suitable for reducing noise effects in medical code prediction. Finally, we evaluate the effectiveness of our model on the widely used MIMIC-III datasets and achieve new SOTA results.
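The abstract's first innovation is building code-specific document representations with label attention: each medical code attends over the token representations of a note to form its own document vector. A minimal sketch of that idea, in plain Python with hypothetical shapes and names (this is an illustration of label attention in general, not the authors' JLAN implementation, which also combines self-attention, joint learning, and denoising):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def label_attention(token_reprs, label_queries):
    """For each label (medical code), attend over token representations
    to build a code-specific document vector.

    token_reprs:   list of token vectors, each of dimension d
    label_queries: one learned query vector per code, also dimension d
    returns:       one attention-pooled document vector per code
    """
    code_vecs = []
    for q in label_queries:
        # Attention weights: how relevant each token is to this code.
        scores = softmax([dot(q, h) for h in token_reprs])
        # Weighted sum of token vectors -> code-specific representation.
        d = len(token_reprs[0])
        vec = [sum(a * h[i] for a, h in zip(scores, token_reprs))
               for i in range(d)]
        code_vecs.append(vec)
    return code_vecs
```

With two orthogonal tokens and a query aligned with the first, nearly all attention mass lands on that token, so the code's document vector is dominated by it; a second query aligned with the other token yields a different, code-specific vector from the same note. In a trained model, each code vector would then be scored by a per-code classifier to predict whether that code applies.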
format Online
Article
Text
id pubmed-8667397
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-8667397 2021-12-13 JLAN: medical code prediction via joint learning attention networks and denoising mechanism Li, Xingwang Zhang, Yijia Islam, Faiz ul Dong, Deshi Wei, Hao Lu, Mingyu BMC Bioinformatics Research BACKGROUND: Clinical notes are documents that contain detailed information about the health status of patients. Medical codes generally accompany them. However, manual diagnosis is costly and error-prone. Moreover, large datasets in clinical diagnosis are susceptible to noisy labels because of erroneous manual annotation. Therefore, machine learning has been utilized to perform automatic diagnoses. Previous state-of-the-art (SOTA) models used convolutional neural networks to build document representations for predicting medical codes. However, the code distribution in clinical notes is usually long-tailed. Moreover, most models fail to deal with noise during code allocation. Therefore, a denoising mechanism and long-tailed classification are the keys to automated coding at scale. RESULTS: In this paper, a new joint learning model is proposed to extend our attention model for predicting medical codes from clinical notes. On the MIMIC-III-50 dataset, our model outperforms all the baselines and SOTA models on all quantitative metrics. On the MIMIC-III-full dataset, our model outperforms the most advanced models in macro-F1, micro-F1, macro-AUC, and precision at eight. In addition, after introducing the denoising mechanism, the model converges faster and its overall loss is reduced. CONCLUSIONS: The innovations of our model are threefold: firstly, code-specific representations can be identified by adopting the self-attention mechanism and the label attention mechanism. Secondly, performance on long-tailed distributions can be boosted by introducing the joint learning mechanism. Thirdly, the denoising mechanism is suitable for reducing noise effects in medical code prediction.
Finally, we evaluate the effectiveness of our model on the widely used MIMIC-III datasets and achieve new SOTA results. BioMed Central 2021-12-13 /pmc/articles/PMC8667397/ /pubmed/34903164 http://dx.doi.org/10.1186/s12859-021-04520-x Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
spellingShingle Research
Li, Xingwang
Zhang, Yijia
Islam, Faiz ul
Dong, Deshi
Wei, Hao
Lu, Mingyu
JLAN: medical code prediction via joint learning attention networks and denoising mechanism
title JLAN: medical code prediction via joint learning attention networks and denoising mechanism
title_full JLAN: medical code prediction via joint learning attention networks and denoising mechanism
title_fullStr JLAN: medical code prediction via joint learning attention networks and denoising mechanism
title_full_unstemmed JLAN: medical code prediction via joint learning attention networks and denoising mechanism
title_short JLAN: medical code prediction via joint learning attention networks and denoising mechanism
title_sort jlan: medical code prediction via joint learning attention networks and denoising mechanism
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8667397/
https://www.ncbi.nlm.nih.gov/pubmed/34903164
http://dx.doi.org/10.1186/s12859-021-04520-x
work_keys_str_mv AT lixingwang jlanmedicalcodepredictionviajointlearningattentionnetworksanddenoisingmechanism
AT zhangyijia jlanmedicalcodepredictionviajointlearningattentionnetworksanddenoisingmechanism
AT islamfaizul jlanmedicalcodepredictionviajointlearningattentionnetworksanddenoisingmechanism
AT dongdeshi jlanmedicalcodepredictionviajointlearningattentionnetworksanddenoisingmechanism
AT weihao jlanmedicalcodepredictionviajointlearningattentionnetworksanddenoisingmechanism
AT lumingyu jlanmedicalcodepredictionviajointlearningattentionnetworksanddenoisingmechanism