Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model
In order to compare magnetic resonance imaging (MRI) findings of patients with large vestibular aqueduct syndrome (LVAS) in the stable hearing loss (HL) group and the fluctuating HL group, this paper provides a reference for clinicians' early intervention. From January 2001 to January 2016, patients with hearing impairment diagnosed with LVAS in infancy at the Department of Otorhinolaryngology, Head and Neck Surgery, Children's Hospital of Fudan University were enrolled and divided into the stable HL group (n = 29) and the fluctuating HL group (n = 30). MRI images at initial diagnosis were collected, and several deep learning neural network models were trained with PyTorch to classify and predict the two groups.
Main Authors: | Duan, Bo; Xu, Zhengmin; Pan, Lili; Chen, Wenxia; Qiao, Zhongwei |
Format: | Online Article Text |
Language: | English |
Published: | Hindawi, 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9020928/ https://www.ncbi.nlm.nih.gov/pubmed/35463685 http://dx.doi.org/10.1155/2022/4814577 |
_version_ | 1784689676890669056 |
author | Duan, Bo Xu, Zhengmin Pan, Lili Chen, Wenxia Qiao, Zhongwei |
author_facet | Duan, Bo Xu, Zhengmin Pan, Lili Chen, Wenxia Qiao, Zhongwei |
author_sort | Duan, Bo |
collection | PubMed |
description | In order to compare magnetic resonance imaging (MRI) findings of patients with large vestibular aqueduct syndrome (LVAS) in the stable hearing loss (HL) group and the fluctuating HL group, this paper provides a reference for clinicians' early intervention. From January 2001 to January 2016, patients with hearing impairment diagnosed with LVAS in infancy at the Department of Otorhinolaryngology, Head and Neck Surgery, Children's Hospital of Fudan University were enrolled and divided into the stable HL group (n = 29) and the fluctuating HL group (n = 30). MRI images at initial diagnosis were collected, and several deep learning neural network models were trained with PyTorch to classify and predict the two groups. Vgg16_bn, vgg19_bn, and ResNet18, convolutional neural networks (CNNs) with fewer layers, performed well for model building, with accuracies of 0.90, 0.80, and 0.85, respectively. ResNet50, a deeper CNN, performed relatively poorly, with an accuracy of 0.54. The GoogLeNet-trained model performed best, with an accuracy of 0.98. We conclude that deep learning-based radiomics can assist doctors in accurately classifying LVAS patients into fluctuating or stable HL types and adopting differentiated treatment methods. (A minimal, illustrative PyTorch training sketch follows this record.) |
format | Online Article Text |
id | pubmed-9020928 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-9020928 2022-04-21 Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model Duan, Bo Xu, Zhengmin Pan, Lili Chen, Wenxia Qiao, Zhongwei J Healthc Eng Research Article In order to compare magnetic resonance imaging (MRI) findings of patients with large vestibular aqueduct syndrome (LVAS) in the stable hearing loss (HL) group and the fluctuating HL group, this paper provides a reference for clinicians' early intervention. From January 2001 to January 2016, patients with hearing impairment diagnosed with LVAS in infancy at the Department of Otorhinolaryngology, Head and Neck Surgery, Children's Hospital of Fudan University were enrolled and divided into the stable HL group (n = 29) and the fluctuating HL group (n = 30). MRI images at initial diagnosis were collected, and several deep learning neural network models were trained with PyTorch to classify and predict the two groups. Vgg16_bn, vgg19_bn, and ResNet18, convolutional neural networks (CNNs) with fewer layers, performed well for model building, with accuracies of 0.90, 0.80, and 0.85, respectively. ResNet50, a deeper CNN, performed relatively poorly, with an accuracy of 0.54. The GoogLeNet-trained model performed best, with an accuracy of 0.98. We conclude that deep learning-based radiomics can assist doctors in accurately classifying LVAS patients into fluctuating or stable HL types and adopting differentiated treatment methods. Hindawi 2022-04-13 /pmc/articles/PMC9020928/ /pubmed/35463685 http://dx.doi.org/10.1155/2022/4814577 Text en Copyright © 2022 Bo Duan et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Duan, Bo Xu, Zhengmin Pan, Lili Chen, Wenxia Qiao, Zhongwei Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model |
title | Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model |
title_full | Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model |
title_fullStr | Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model |
title_full_unstemmed | Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model |
title_short | Prediction of Hearing Prognosis of Large Vestibular Aqueduct Syndrome Based on the PyTorch Deep Learning Model |
title_sort | prediction of hearing prognosis of large vestibular aqueduct syndrome based on the pytorch deep learning model |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9020928/ https://www.ncbi.nlm.nih.gov/pubmed/35463685 http://dx.doi.org/10.1155/2022/4814577 |
work_keys_str_mv | AT duanbo predictionofhearingprognosisoflargevestibularaqueductsyndromebasedonthepytorchdeeplearningmodel AT xuzhengmin predictionofhearingprognosisoflargevestibularaqueductsyndromebasedonthepytorchdeeplearningmodel AT panlili predictionofhearingprognosisoflargevestibularaqueductsyndromebasedonthepytorchdeeplearningmodel AT chenwenxia predictionofhearingprognosisoflargevestibularaqueductsyndromebasedonthepytorchdeeplearningmodel AT qiaozhongwei predictionofhearingprognosisoflargevestibularaqueductsyndromebasedonthepytorchdeeplearningmodel |
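The record does not include the authors' code. The sketch below is a minimal, hypothetical illustration of the kind of transfer-learning setup the abstract describes: a torchvision GoogLeNet pretrained on ImageNet, its final layer replaced with a two-way classifier (stable HL vs. fluctuating HL), fine-tuned on 2D MRI slices. The directory layout (`mri_data/train/...`), preprocessing, and hyperparameters are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Preprocessing for 2D MRI slices. Assumed (hypothetical) layout:
# mri_data/train/<class>/*.png with one folder per class, e.g. "stable" and "fluctuating".
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),      # single-channel MRI -> 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("mri_data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Pretrained GoogLeNet with its classifier head replaced by a two-way classifier.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters

model.train()
for epoch in range(20):  # number of epochs is an assumption
    running_loss, correct, total = 0.0, 0, 0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        if not torch.is_tensor(outputs):
            # GoogLeNet returns a namedtuple when auxiliary classifiers are active.
            outputs = outputs.logits
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * labels.size(0)
        correct += (outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch + 1}: loss={running_loss / total:.4f} acc={correct / total:.3f}")
```

Swapping `models.googlenet` for `models.vgg16_bn`, `models.vgg19_bn`, `models.resnet18`, or `models.resnet50` (and replacing the corresponding classifier head: `.fc` for the ResNets, `.classifier[6]` for the VGGs) would mirror the architecture comparison in the abstract; evaluation on a held-out set would be needed to obtain accuracies comparable to those reported.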