Deep learning for anatomical interpretation of video bronchoscopy images
Main Authors:
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8660867/
https://www.ncbi.nlm.nih.gov/pubmed/34887497
http://dx.doi.org/10.1038/s41598-021-03219-6
Summary: Anesthesiologists commonly use video bronchoscopy to facilitate intubation or to confirm the location of the endotracheal tube; however, depth and orientation within the bronchial tree are easily confused because anesthesiologists cannot trace the airway from the oropharynx when the bronchoscope is passed through an endotracheal tube. Moreover, the decubitus position is often used in certain surgeries. Although rare, misinterpretation of the tube location can cause accidental extubation or endobronchial intubation, which can lead to hyperinflation. Thus, video bronchoscopy with an artificial-intelligence decision support system would be useful in the anesthesiologic process. In this study, we aimed to develop an artificial intelligence model robust to rotation and covering, using video bronchoscopy images. We collected video bronchoscopic images from an institutional database. The collected images were automatically labeled by an optical character recognition engine as the carina or the left or right main bronchus. After 180 images were reserved for the evaluation dataset, 80% of the remaining images were randomly allocated to the training dataset, and the rest were assigned to the validation and test datasets in a 7:3 ratio. Random image rotation and circular cropping were applied as augmentation. Ten pretrained models, each with fewer than 25 million parameters, were trained on the training and validation datasets, and the model showing the best prediction accuracy on the test dataset was selected as the final model. Six human experts reviewed the evaluation dataset to infer the anatomical locations, for comparison with the final model. In total, 8688 images were prepared and assigned to the evaluation (180), training (6806), validation (1191), and test (511) datasets. The EfficientNetB1 model showed the highest accuracy (0.86) and was selected as the final model.
On the evaluation dataset, the final model performed better (accuracy 0.84) than five of the six human experts (accuracies 0.38, 0.44, 0.51, 0.68, and 0.63); only the most experienced pulmonologist showed performance comparable (0.82) with that of the final model. The performance of the human experts was generally proportional to their experience, and the difference between anesthesiologists and pulmonologists was most marked in discriminating the right main bronchus. Using bronchoscopic images, our model could distinguish the carina and both main bronchi under random rotation and covering, with performance comparable to that of the most experienced human expert. This model can serve as a basis for a clinical decision support system with video bronchoscopy.
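The dataset split (180 evaluation images, then 80% training, with the remainder divided 7:3 into validation and test) and the circular-cropping augmentation described in the abstract can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names and the NumPy-based masking are assumptions, and the random rotation step (which in practice could use, e.g., an image-library rotation routine) is omitted to keep the sketch dependency-light.

```python
import random
import numpy as np

def split_dataset(image_ids, eval_size=180, train_frac=0.8, val_test=(7, 3), seed=0):
    """Reserve an evaluation set, then split the rest 80%/20%, with the 20% divided 7:3."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    eval_set, rest = ids[:eval_size], ids[eval_size:]
    n_train = round(len(rest) * train_frac)
    train, remainder = rest[:n_train], rest[n_train:]
    n_val = round(len(remainder) * val_test[0] / sum(val_test))
    return eval_set, train, remainder[:n_val], remainder[n_val:]

def circular_crop(img):
    """Zero out pixels outside the largest inscribed circle,
    mimicking the round endoscopic field of view."""
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= (min(h, w) / 2) ** 2
    return img * (mask[..., None] if img.ndim == 3 else mask)

ev, tr, va, te = split_dataset(range(8688))
print(len(ev), len(tr), len(va), len(te))  # 180 6806 1191 511
```

With 8688 images, this split reproduces the counts reported in the abstract (180 evaluation, 6806 training, 1191 validation, 511 test).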