Distinguishing bronchoscopically observed anatomical positions of airway under by convolutional neural network
Main Authors:
Format: Online Article Text
Language: English
Published: SAGE Publications, 2023
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10457519/
https://www.ncbi.nlm.nih.gov/pubmed/37637372
http://dx.doi.org/10.1177/20406223231181495
Summary:
BACKGROUND: Artificial intelligence (AI) technology has been used for finding lesions via gastrointestinal endoscopy. However, few AI-associated studies have addressed bronchoscopy.
OBJECTIVES: To use a convolutional neural network (CNN) to recognize the anatomical positions of the airway observed during bronchoscopy.
DESIGN: We analyzed imaging data from patients who underwent bronchoscopy between March 2022 and October 2022 using EfficientNet (a CNN architecture) and U-Net.
METHODS: Based on the inclusion and exclusion criteria, 1527 clear images of normal airway anatomical positions from 200 patients were used for training, and 475 clear images from 72 patients were used for validation. In addition, 20 bronchoscopic videos of examination procedures in another 20 patients with normal airway structures were used to extract bronchoscopic images of normal anatomical positions and evaluate the accuracy of the model. Finally, 21 respiratory physicians were enrolled to test recognition of the correct anatomical positions using the validation dataset.
RESULTS: In all, 1527 bronchoscopic images of 200 patients covering nine anatomical positions of the airway (carina, right main bronchus, right upper lobe bronchus, right intermediate bronchus, right middle lobe bronchus, right lower lobe bronchus, left main bronchus, left upper lobe bronchus, and left lower lobe bronchus) were used for supervised machine learning and training, and 475 clear bronchoscopic images of 72 patients were used for validation. The mean accuracy of recognizing these nine positions was 91% (carina: 98%, right main bronchus: 98%, right intermediate bronchus: 90%, right upper lobe bronchus: 91%, right middle lobe bronchus: 92%, right lower lobe bronchus: 83%, left main bronchus: 89%, left upper lobe bronchus: 91%, left lower lobe bronchus: 76%). The areas under the curves for these nine positions were all >0.98. In addition, the accuracy of the trained model in extracting images from video was 94.7%. We also conducted a deep learning study to segment the 10 segmental bronchi in the right lung and the 8 segmental bronchi in the left lung; because of the limited radial depth, only the segmental bronchial distributions below the right upper lobe bronchus and the right middle lobe bronchus could be correctly recognized. The recognition accuracy of doctors who had received interventional pulmonology education in our hospital for over 6 months was 84.33 ± 7.52%.
CONCLUSION: Our study demonstrated that AI technology can be used to distinguish the normal anatomical positions of the airway, and that the trained model can extract the correct images from video to help standardize data collection and control quality.
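
The abstract describes training an EfficientNet classifier on images labeled with nine airway positions. The sketch below is a minimal illustration of that kind of setup in PyTorch, not the authors' code: the class names, dataset layout, and hyperparameters are assumptions.

# Minimal sketch (not the authors' code): fine-tuning an EfficientNet
# classifier for the nine bronchoscopic anatomical positions named in the
# abstract. Folder layout, epochs, and learning rate are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

POSITIONS = [  # nine airway positions listed in the abstract
    "carina", "right_main", "right_upper", "right_intermediate",
    "right_middle", "right_lower", "left_main", "left_upper", "left_lower",
]

def build_model(num_classes: int = len(POSITIONS)) -> nn.Module:
    # Start from ImageNet weights and replace the classification head.
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, num_classes)
    return model

def main() -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    # "train_images/" is a hypothetical folder-per-class dataset layout.
    train_ds = datasets.ImageFolder("train_images", transform=tfm)
    loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

    model = build_model().to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(10):  # illustrative epoch count
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

if __name__ == "__main__":
    main()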
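
The abstract also reports that the trained model extracted correctly classified images from bronchoscopy videos with 94.7% accuracy. The following sketch, again an assumption-laden illustration rather than the authors' pipeline, samples frames with OpenCV and keeps those the classifier labels with high confidence; build_model, POSITIONS, "train_sketch", and "model.pt" are hypothetical carry-overs from the previous sketch.

# Minimal sketch (assumptions throughout): applying a trained classifier to a
# bronchoscopy video to pick out confidently recognized frames of each
# anatomical position, mirroring the video-extraction step in the abstract.
import cv2
import torch
from torchvision import transforms

# Hypothetical helpers from the training sketch above.
from train_sketch import build_model, POSITIONS

def classify_video(path: str, every_n: int = 30, threshold: float = 0.9):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_model().to(device)
    model.load_state_dict(torch.load("model.pt", map_location=device))
    model.eval()

    tfm = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224), antialias=True),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    cap = cv2.VideoCapture(path)
    frame_idx, kept = 0, []
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % every_n == 0:  # sample roughly one frame per second at ~30 fps
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                x = tfm(rgb).unsqueeze(0).to(device)
                probs = torch.softmax(model(x), dim=1)[0]
                conf, cls = probs.max(dim=0)
                if conf.item() >= threshold:  # keep only confidently classified frames
                    kept.append((frame_idx, POSITIONS[cls.item()], conf.item()))
            frame_idx += 1
    cap.release()
    return kept

Thresholding on the softmax confidence is one simple way to approximate "clear image" selection; the actual criteria used in the study are not specified in the abstract.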