Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study

Bibliographic Details
Main Authors: Liu, Dianbo; Zheng, Ming; Sepulveda, Nestor Andres
Format: Online Article Text
Language: English
Published: JMIR Publications 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8701705/
https://www.ncbi.nlm.nih.gov/pubmed/34889747
http://dx.doi.org/10.2196/20767
_version_ 1784621067501830144
author Liu, Dianbo
Zheng, Ming
Sepulveda, Nestor Andres
author_facet Liu, Dianbo
Zheng, Ming
Sepulveda, Nestor Andres
author_sort Liu, Dianbo
collection PubMed
description BACKGROUND: Machine learning applications in the health care domain can have a great impact on people’s lives. At the same time, medical data sets are usually large and require significant computational resources. Although this might not be a problem for the wide adoption of machine learning tools in high-income countries, the availability of computational resources can be limited in low-income countries and on mobile devices, which can prevent many people from benefiting from advances in machine learning applications in health care. OBJECTIVE: In this study, we explore three methods to increase the computational efficiency and reduce the model sizes of either recurrent neural networks (RNNs) or feedforward deep neural networks (DNNs) without compromising their accuracy. METHODS: We used inpatient mortality prediction on an intensive care unit dataset as our case study. We reduced the size of the RNN and DNN by pruning “unused” neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the number of recurrent layers, thereby lowering the total number of parameters in the network. Finally, we quantized the DNN weights, storing them in 8 bits instead of 32 bits. RESULTS: We found that all methods improved implementation efficiency, including training speed, memory footprint, and inference speed, without reducing the accuracy of mortality prediction. CONCLUSIONS: Our findings suggest that neural network condensation allows sophisticated neural network algorithms to be deployed on devices with limited computational resources.
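The record itself contains no code. As an illustration of the neuron pruning described in the abstract, below is a minimal sketch, assuming PyTorch and arbitrary example layer sizes (not taken from the paper), of removing output neurons whose incoming weights have small L2 norms.

import torch
import torch.nn as nn

# Hypothetical layer sizes chosen for illustration only.
layer = nn.Linear(64, 32)

with torch.no_grad():
    # Score each output neuron by the L2 norm of its incoming weights;
    # neurons with near-zero norms contribute little ("unused" neurons).
    neuron_norms = layer.weight.norm(p=2, dim=1)
    keep = neuron_norms >= neuron_norms.median()   # placeholder criterion: keep the upper half

    # Build a smaller layer that contains only the kept neurons.
    pruned = nn.Linear(layer.in_features, int(keep.sum()))
    pruned.weight.copy_(layer.weight[keep])
    pruned.bias.copy_(layer.bias[keep])

Any downstream layer would also need its input dimension adjusted to match the kept neurons; the record does not specify the authors' pruning criterion, so the median threshold here is only a placeholder.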
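Similarly, a hedged sketch of the RNN modification described in the METHODS: a single recurrent layer whose cell is followed by an extra dense transform, standing in for a deeper stack of recurrent layers with more parameters. The cell type (GRU), the output head, and all sizes are assumptions, not details from the paper.

import torch
import torch.nn as nn

class CondensedRNN(nn.Module):
    """One recurrent layer with an extra hidden layer inside the recurrence,
    used in place of a multi-layer RNN with more parameters."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)          # assumed cell type
        self.extra = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Tanh())
        self.head = nn.Linear(hidden_size, 1)                    # e.g., a mortality logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        h = x.new_zeros(x.size(0), self.cell.hidden_size)
        for t in range(x.size(1)):
            h = self.extra(self.cell(x[:, t, :], h))             # extra transform at each step
        return self.head(h)

model = CondensedRNN(input_size=48, hidden_size=64)              # hypothetical sizes
logits = model(torch.randn(8, 24, 48))                           # 8 stays, 24 time steps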
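Finally, a minimal sketch of the 8-bit weight quantization in the sense described above (weights stored in 8 bits instead of 32). This is plain affine quantization written out by hand; the authors' actual quantization scheme and tooling are not specified in this record.

import torch

def quantize_uint8(w: torch.Tensor):
    """Affine-quantize a float32 tensor to 8-bit integers (1 byte per weight)."""
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / 255.0
    zero_point = torch.round(-w_min / scale)
    q = torch.clamp(torch.round(w / scale) + zero_point, 0, 255).to(torch.uint8)
    return q, scale, zero_point

def dequantize(q: torch.Tensor, scale, zero_point) -> torch.Tensor:
    """Recover approximate float32 weights for use at inference time."""
    return (q.to(torch.float32) - zero_point) * scale

w = torch.randn(32, 64)            # a float32 weight matrix (4 bytes per weight)
q, scale, zp = quantize_uint8(w)   # stored in 1 byte per weight
w_hat = dequantize(q, scale, zp)   # rounding error is bounded by scale / 2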
format Online
Article
Text
id pubmed-8701705
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-87017052022-01-10 Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study Liu, Dianbo Zheng, Ming Sepulveda, Nestor Andres JMIR Form Res Original Paper BACKGROUND: Machine learning applications in the health care domain can have a great impact on people’s lives. At the same time, medical data sets are usually large and require significant computational resources. Although this might not be a problem for the wide adoption of machine learning tools in high-income countries, the availability of computational resources can be limited in low-income countries and on mobile devices, which can prevent many people from benefiting from advances in machine learning applications in health care. OBJECTIVE: In this study, we explore three methods to increase the computational efficiency and reduce the model sizes of either recurrent neural networks (RNNs) or feedforward deep neural networks (DNNs) without compromising their accuracy. METHODS: We used inpatient mortality prediction on an intensive care unit dataset as our case study. We reduced the size of the RNN and DNN by pruning “unused” neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the number of recurrent layers, thereby lowering the total number of parameters in the network. Finally, we quantized the DNN weights, storing them in 8 bits instead of 32 bits. RESULTS: We found that all methods improved implementation efficiency, including training speed, memory footprint, and inference speed, without reducing the accuracy of mortality prediction. CONCLUSIONS: Our findings suggest that neural network condensation allows sophisticated neural network algorithms to be deployed on devices with limited computational resources. JMIR Publications 2021-12-08 /pmc/articles/PMC8701705/ /pubmed/34889747 http://dx.doi.org/10.2196/20767 Text en ©Dianbo Liu, Ming Zheng, Nestor Andres Sepulveda. Originally published in JMIR Formative Research (https://formative.jmir.org), 08.12.2021. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.
spellingShingle Original Paper
Liu, Dianbo
Zheng, Ming
Sepulveda, Nestor Andres
Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study
title Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study
title_full Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study
title_fullStr Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study
title_full_unstemmed Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study
title_short Using Artificial Neural Network Condensation to Facilitate Adaptation of Machine Learning in Medical Settings by Reducing Computational Burden: Model Design and Evaluation Study
title_sort using artificial neural network condensation to facilitate adaptation of machine learning in medical settings by reducing computational burden: model design and evaluation study
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8701705/
https://www.ncbi.nlm.nih.gov/pubmed/34889747
http://dx.doi.org/10.2196/20767
work_keys_str_mv AT liudianbo usingartificialneuralnetworkcondensationtofacilitateadaptationofmachinelearninginmedicalsettingsbyreducingcomputationalburdenmodeldesignandevaluationstudy
AT zhengming usingartificialneuralnetworkcondensationtofacilitateadaptationofmachinelearninginmedicalsettingsbyreducingcomputationalburdenmodeldesignandevaluationstudy
AT sepulvedanestorandres usingartificialneuralnetworkcondensationtofacilitateadaptationofmachinelearninginmedicalsettingsbyreducingcomputationalburdenmodeldesignandevaluationstudy