
A comparative study on deep learning models for text classification of unstructured medical notes with various levels of class imbalance


Bibliographic Details
Main Authors: Lu, Hongxia, Ehwerhemuepha, Louis, Rakovski, Cyril
Format: Online Article Text
Language: English
Published: BioMed Central 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9250736/
https://www.ncbi.nlm.nih.gov/pubmed/35780100
http://dx.doi.org/10.1186/s12874-022-01665-y
author Lu, Hongxia
Ehwerhemuepha, Louis
Rakovski, Cyril
collection PubMed
description BACKGROUND: Discharge medical notes written by physicians contain important information about the health condition of patients. Many deep learning algorithms have been successfully applied to extract important information from unstructured medical notes that can inform subsequent actionable results in the medical domain. This study aims to explore the performance of various deep learning algorithms on text classification tasks over medical notes under different disease class imbalance scenarios. METHODS: We employed seven artificial intelligence models: a CNN (Convolutional Neural Network), a Transformer encoder, a pretrained BERT (Bidirectional Encoder Representations from Transformers) model, and four typical sequence neural network models, namely an RNN (Recurrent Neural Network), GRU (Gated Recurrent Unit), LSTM (Long Short-Term Memory), and Bi-LSTM (Bi-directional Long Short-Term Memory), to classify the presence or absence of 16 disease conditions from patients' discharge summary notes. We framed this task as 16 separate binary classification problems. The performance of the seven models on each of the 16 datasets, with various levels of imbalance between classes, was compared in terms of AUC-ROC (Area Under the Receiver Operating Characteristic Curve), AUC-PR (Area Under the Precision-Recall Curve), F1 score, and balanced accuracy, as well as training time. Model performance was also compared in combination with different word embedding approaches (GloVe, BioWordVec, and no pre-trained word embeddings). RESULTS: The analyses of these 16 binary classification problems showed that the Transformer encoder model performed best in nearly all scenarios. In addition, when the disease prevalence was close to or greater than 50%, the CNN model achieved performance comparable to the Transformer encoder, and its training time was 17.6% shorter than that of the second-fastest model, 91.3% shorter than the Transformer encoder, and 94.7% shorter than the pre-trained BERT-Base model. The BioWordVec embeddings slightly improved the performance of the Bi-LSTM model in most disease prevalence scenarios, while the CNN model performed better without pre-trained word embeddings. In addition, training time was significantly reduced with the GloVe embeddings for all models. CONCLUSIONS: For classification tasks on medical notes, Transformer encoders are the best choice if computational resources are not a constraint. Otherwise, when the classes are relatively balanced, CNNs are a leading candidate because of their competitive performance and computational efficiency. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-022-01665-y.
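The four evaluation metrics named in the abstract can be computed for a single one of the 16 binary classifiers roughly as follows. This is a hypothetical illustration using scikit-learn and toy labels/scores, not the authors' code; the 0.5 decision threshold is an assumption.

```python
# Hypothetical sketch: scoring one binary disease classifier with the four
# metrics compared in the study (AUC-ROC, AUC-PR, F1, balanced accuracy).
import numpy as np
from sklearn.metrics import (
    roc_auc_score,            # AUC-ROC
    average_precision_score,  # AUC-PR (average precision)
    f1_score,
    balanced_accuracy_score,
)

# Toy imbalanced example: 2 positives out of 8 notes (25% prevalence).
y_true = np.array([0, 0, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.2, 0.8, 0.3, 0.1, 0.7, 0.6])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)  # hard labels at an assumed 0.5 threshold

metrics = {
    "AUC-ROC": roc_auc_score(y_true, y_score),            # ranking quality
    "AUC-PR": average_precision_score(y_true, y_score),   # robust under imbalance
    "F1": f1_score(y_true, y_pred),
    "Balanced accuracy": balanced_accuracy_score(y_true, y_pred),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

AUC-ROC and AUC-PR are threshold-free (they use the raw scores), while F1 and balanced accuracy depend on the chosen threshold, which is one reason the study reports all four under different prevalence levels.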
format Online
Article
Text
id pubmed-9250736
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-9250736 2022-07-04. Published in BMC Med Res Methodol (Research) by BioMed Central, 2022-07-02. /pmc/articles/PMC9250736/ /pubmed/35780100 http://dx.doi.org/10.1186/s12874-022-01665-y © The Author(s) 2022. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title A comparative study on deep learning models for text classification of unstructured medical notes with various levels of class imbalance
topic Research