Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning
When dealing with clinical text classification on a small dataset, recent studies have confirmed that a well-tuned multilayer perceptron outperforms other generative classifiers, including deep learning ones. To increase the performance of the neural network classifier, feature selection for the learning representation can effectively be used.
Format: | Online Article Text |
Language: | English |
Published: | IEEE 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10561736/ https://www.ncbi.nlm.nih.gov/pubmed/37817825 http://dx.doi.org/10.1109/JTEHM.2023.3241635 |
_version_ | 1785117984940883968 |
collection | PubMed |
description | When dealing with clinical text classification on a small dataset, recent studies have confirmed that a well-tuned multilayer perceptron outperforms other generative classifiers, including deep learning ones. To increase the performance of the neural network classifier, feature selection for the learning representation can effectively be used. However, most feature selection methods only estimate the degree of linear dependency between variables and select the best features based on univariate statistical tests; moreover, the sparsity of the feature space involved in the learning representation is ignored. Goal: Our aim is, therefore, to assess an alternative approach that tackles the sparsity by compressing the clinical representation feature space, so that limited French clinical notes can also be handled effectively. Methods: This study proposed an autoencoder learning algorithm to exploit sparsity reduction in clinical note representation. The motivation was to determine how to compress sparse, high-dimensional data by reducing the dimension of the clinical note representation feature space. The classification performance of the classifiers was then evaluated in the trained, compressed feature space. Results: The proposed approach provided overall performance gains of up to 3% for each test set evaluation. Finally, the classifier achieved 92% accuracy, 91% recall, 91% precision, and 91% F1-score in detecting the patient’s condition. Furthermore, the compression mechanism and the autoencoder prediction process were demonstrated by applying the information bottleneck theoretical framework. Clinical and Translational Impact Statement: An autoencoder learning algorithm effectively tackles the problem of sparsity in the representation feature space derived from a small clinical narrative dataset. Significantly, it can learn the best representation of the training data because of its lossless compression capacity compared to other approaches. Consequently, its downstream classification ability can be significantly improved, which cannot be achieved using deep learning models. (An illustrative sketch of this compress-then-classify workflow follows the record fields below.) |
format | Online Article Text |
id | pubmed-10561736 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | IEEE |
record_format | MEDLINE/PubMed |
spelling | pubmed-10561736 2023-10-10 Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning. IEEE J Transl Eng Health Med. Article. [Abstract as in the description field above.] IEEE 2023-02-02 /pmc/articles/PMC10561736/ /pubmed/37817825 http://dx.doi.org/10.1109/JTEHM.2023.3241635 Text en © 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ |
spellingShingle | Article Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning |
title | Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning |
title_full | Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning |
title_fullStr | Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning |
title_full_unstemmed | Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning |
title_short | Adaptation of Autoencoder for Sparsity Reduction From Clinical Notes Representation Learning |
title_sort | adaptation of autoencoder for sparsity reduction from clinical notes representation learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10561736/ https://www.ncbi.nlm.nih.gov/pubmed/37817825 http://dx.doi.org/10.1109/JTEHM.2023.3241635 |
work_keys_str_mv | AT adaptationofautoencoderforsparsityreductionfromclinicalnotesrepresentationlearning AT adaptationofautoencoderforsparsityreductionfromclinicalnotesrepresentationlearning AT adaptationofautoencoderforsparsityreductionfromclinicalnotesrepresentationlearning AT adaptationofautoencoderforsparsityreductionfromclinicalnotesrepresentationlearning AT adaptationofautoencoderforsparsityreductionfromclinicalnotesrepresentationlearning |
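The abstract describes a two-stage workflow: compress a sparse, high-dimensional clinical-note representation (for example, bag-of-words or TF-IDF vectors) with an autoencoder, then train a multilayer-perceptron classifier on the compressed bottleneck features. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation; the layer sizes, the 128-dimensional bottleneck, the training loop, and the `fit_autoencoder`/`encode` helpers are all hypothetical choices made for the example.

```python
# Minimal sketch: autoencoder compression of a sparse note representation,
# followed by an MLP classifier on the compressed features. All dimensions,
# hyperparameters, and helper names are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neural_network import MLPClassifier


class Autoencoder(nn.Module):
    def __init__(self, n_features: int, bottleneck: int = 128):
        super().__init__()
        # Encoder maps the sparse, high-dimensional vector to a dense bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, bottleneck), nn.ReLU(),
        )
        # Decoder tries to reconstruct the original representation.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 512), nn.ReLU(),
            nn.Linear(512, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def fit_autoencoder(X: np.ndarray, epochs: int = 50, lr: float = 1e-3) -> Autoencoder:
    """Train the autoencoder to reconstruct X (documents x features)."""
    X_t = torch.tensor(X, dtype=torch.float32)
    model = Autoencoder(X.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # reconstruction loss drives the compression
    for _ in range(epochs):
        optimizer.zero_grad()
        reconstruction, _ = model(X_t)
        loss = loss_fn(reconstruction, X_t)
        loss.backward()
        optimizer.step()
    return model


def encode(model: Autoencoder, X: np.ndarray) -> np.ndarray:
    """Project documents into the compressed bottleneck space."""
    with torch.no_grad():
        _, z = model(torch.tensor(X, dtype=torch.float32))
    return z.numpy()


# Hypothetical usage: X_train/X_test are dense arrays of note vectors
# (e.g. TF-IDF), y_train/y_test the patient-condition labels.
# ae = fit_autoencoder(X_train)
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
# clf.fit(encode(ae, X_train), y_train)
# print(clf.score(encode(ae, X_test), y_test))
```

In this sketch the autoencoder is trained only on the reconstruction objective, so the class labels never influence the compression step; the downstream classifier is then fit and evaluated entirely in the compressed space, which is the setting in which the abstract reports its performance gains of up to 3%.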