Deep learning for anatomical interpretation of video bronchoscopy images
Main Authors: | Yoo, Ji Young; Kang, Se Yoon; Park, Jong Sun; Cho, Young-Jae; Park, Sung Yong; Yoon, Ho Il; Park, Sang Jun; Jeong, Han-Gil; Kim, Tackeun |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2021 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8660867/ https://www.ncbi.nlm.nih.gov/pubmed/34887497 http://dx.doi.org/10.1038/s41598-021-03219-6 |
author | Yoo, Ji Young; Kang, Se Yoon; Park, Jong Sun; Cho, Young-Jae; Park, Sung Yong; Yoon, Ho Il; Park, Sang Jun; Jeong, Han-Gil; Kim, Tackeun |
author_sort | Yoo, Ji Young |
collection | PubMed |
description | Anesthesiologists commonly use video bronchoscopy to facilitate intubation or to confirm the location of the endotracheal tube; however, depth and orientation in the bronchial tree are easily confused because the anesthesiologist cannot trace the airway from the oropharynx when the scope is passed through an endotracheal tube. Moreover, the decubitus position is often used in certain surgeries. Although rare, misinterpretation of the tube location can cause accidental extubation or endobronchial intubation, which can lead to hyperinflation. Thus, video bronchoscopy with an artificial-intelligence decision support system would be useful in anesthesiologic practice. In this study, we aimed to develop an artificial intelligence model robust to rotation and covering of video bronchoscopy images. We collected video bronchoscopic images from an institutional database. The collected images were automatically labeled by an optical character recognition engine as the carina or the left/right main bronchus. Apart from 180 images reserved for the evaluation dataset, 80% of the images were randomly allocated to the training dataset; the remainder were assigned to the validation and test datasets in a 7:3 ratio. Random image rotation and circular cropping were applied. Ten pretrained model architectures with fewer than 25 million parameters were trained using the training and validation datasets, and the model with the best prediction accuracy on the test dataset was selected as the final model. Six human experts reviewed the evaluation dataset to infer the anatomical locations, so that their performance could be compared with that of the final model. In the experiments, 8688 images were prepared and assigned to the evaluation (180), training (6806), validation (1191), and test (511) datasets. The EfficientNetB1 model showed the highest accuracy (0.86) and was selected as the final model. On the evaluation dataset, the final model (accuracy, 0.84) outperformed almost all human experts (0.38, 0.44, 0.51, 0.68, and 0.63); only the most experienced pulmonologist showed comparable performance (0.82). The performance of the human experts was generally proportional to their experience. The difference between anesthesiologists and pulmonologists was most marked in discriminating the right main bronchus. Using bronchoscopic images, our model could distinguish the carina and both main bronchi under random rotation and covering, with performance comparable to that of the most experienced human expert. This model can serve as the basis for a clinical decision support system for video bronchoscopy. |
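The dataset split and augmentation described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the function names are hypothetical, the rotation is restricted to multiples of 90 degrees for simplicity (the paper applies random rotation generally), and the split arithmetic is inferred from the reported counts (8688 total; 180 evaluation; 80% of the rest to training; remainder split 7:3).

```python
import numpy as np

def split_dataset(n_images, rng, n_eval=180, train_frac=0.8, val_ratio=0.7):
    """Shuffle image indices and split them as described in the abstract:
    a fixed evaluation set, 80% of the rest for training, and the
    remainder divided 7:3 into validation and test sets."""
    idx = rng.permutation(n_images)
    eval_idx, rest = idx[:n_eval], idx[n_eval:]
    n_train = int(train_frac * len(rest))        # 80% of non-evaluation images
    train_idx, remainder = rest[:n_train], rest[n_train:]
    n_val = round(val_ratio * len(remainder))    # remaining images split 7:3
    return eval_idx, train_idx, remainder[:n_val], remainder[n_val:]

def random_rotate_and_circle_crop(img, rng):
    """Augment one H x W (x C) image: rotate by a random multiple of 90
    degrees (a simplification of arbitrary rotation) and zero out pixels
    outside the inscribed circle, mimicking the bronchoscope's circular
    field of view."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (min(h, w) / 2) ** 2
    if img.ndim == 3:
        mask = mask[..., None]                   # broadcast over channels
    return np.where(mask, img, 0)
```

With `n_images=8688` this split reproduces the reported sizes (180 / 6806 / 1191 / 511). Rotating by arbitrary angles would additionally require interpolation (e.g. `scipy.ndimage.rotate`), which is omitted here to keep the sketch dependency-free.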
format | Online Article Text |
id | pubmed-8660867 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-8660867 2021-12-13 Deep learning for anatomical interpretation of video bronchoscopy images. Sci Rep, Article. Nature Publishing Group UK, published online 2021-12-09. /pmc/articles/PMC8660867/ /pubmed/34887497 http://dx.doi.org/10.1038/s41598-021-03219-6 Text en © The Author(s) 2021. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
title | Deep learning for anatomical interpretation of video bronchoscopy images |
topic | Article |