
Automatic view classification of contrast and non-contrast echocardiography


Bibliographic Details
Main Authors: Zhu, Ye, Ma, Junqiang, Zhang, Zisang, Zhang, Yiwei, Zhu, Shuangshuang, Liu, Manwei, Zhang, Ziming, Wu, Chun, Yang, Xin, Cheng, Jun, Ni, Dong, Xie, Mingxing, Xue, Wufeng, Zhang, Li
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9515903/
https://www.ncbi.nlm.nih.gov/pubmed/36186996
http://dx.doi.org/10.3389/fcvm.2022.989091
Description
Summary: BACKGROUND: Contrast and non-contrast echocardiography are crucial for cardiovascular diagnosis and treatment. Correct view classification is a foundational step for the analysis of cardiac structure and function. View classification from all sequences of a patient is laborious and depends heavily on the sonographer's experience. In addition, intra-view variability and inter-view similarity increase the difficulty of identifying critical views in contrast and non-contrast echocardiography. This study aims to develop a deep residual convolutional neural network (CNN) to automatically identify multiple views of contrast and non-contrast echocardiography, including the parasternal left ventricular short-axis and apical two-, three-, and four-chamber views.

METHODS: The study retrospectively analyzed a cohort of 855 patients who had undergone left ventricular opacification at the Department of Ultrasound Medicine, Wuhan Union Medical College Hospital from 2013 to 2021, including 70.3% men and 29.7% women aged 41 to 62 (median age, 53). All datasets were preprocessed to remove sensitive information, and 10 frames at equal intervals were sampled from each of the original videos. The training, validation, and test datasets contained 19,370, 2,370, and 2,620 frames from 9 views, corresponding to 688, 84, and 83 patients, respectively. We trained the CNN model to classify echocardiographic views with an initial learning rate of 0.001 and a batch size of 4 for 30 epochs. The learning rate was decayed by a factor of 0.9 per epoch.

RESULTS: On the test dataset, the overall classification accuracy was 99.1% and 99.5% for contrast and non-contrast echocardiographic views, respectively. The average precision, recall, specificity, and F1 score across the 9 echocardiographic views were 96.9%, 96.9%, 100%, and 96.9%, respectively.

CONCLUSIONS: This study highlights the potential of CNNs for view classification of echocardiograms with and without contrast. It shows promise in improving the workflow of clinical analysis of echocardiography.
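The abstract specifies the preprocessing (10 frames sampled at equal intervals per video) and the training schedule (initial learning rate 0.001, batch size 4, 30 epochs, per-epoch learning-rate decay of 0.9). The sketch below is a hypothetical PyTorch reconstruction of that setup, not the authors' released code: the use of OpenCV for frame extraction, ResNet-18 as the residual backbone, and the Adam optimizer are assumptions made for illustration only.

```python
# Hypothetical sketch of the described pipeline (assumptions: OpenCV frame
# extraction, ResNet-18 backbone, Adam optimizer). Hyperparameters follow the
# abstract: lr=0.001, batch size 4, 30 epochs, LR decayed by 0.9 per epoch.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models


def sample_frames(video_path, n_frames=10):
    """Sample n_frames at equal intervals from an echocardiographic video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, n_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames


def build_model(num_views=9):
    """Deep residual CNN (ResNet-18 as a stand-in) with a 9-way view head."""
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_views)
    return model


def train(model, train_loader, device="cuda"):
    """Train for 30 epochs; train_loader is assumed to yield batches of 4."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial LR 0.001
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    for epoch in range(30):
        model.train()
        for frames, labels in train_loader:
            frames, labels = frames.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(frames), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()  # decay the learning rate by 0.9 once per epoch
```

Under these assumptions, each sampled frame would be classified independently into one of the 9 contrast or non-contrast views; how the authors aggregate frame-level predictions per sequence is not stated in the abstract.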