
Modified GAN Augmentation Algorithms for the MRI-Classification of Myocardial Scar Tissue in Ischemic Cardiomyopathy


Bibliographic Details
Main Authors: Sharma, Umesh C., Zhao, Kanhao, Mentkowski, Kyle, Sonkawade, Swati D., Karthikeyan, Badri, Lang, Jennifer K., Ying, Leslie
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8473636/
https://www.ncbi.nlm.nih.gov/pubmed/34589528
http://dx.doi.org/10.3389/fcvm.2021.726943
Description
Summary: Contrast-enhanced cardiac magnetic resonance imaging (MRI) is routinely used to determine myocardial scar burden and to guide therapeutic decisions for coronary revascularization. Currently, there are no optimized deep-learning algorithms for the automated classification of scarred vs. normal myocardium. We report a modified Generative Adversarial Network (GAN) augmentation method to improve the binary classification of myocardial scar using both pre-clinical and clinical approaches. For the initial training of the MobileNetV2 platform, we used images generated from high-field (9.4T) cardiac MRI of a mouse model of acute myocardial infarction (MI). Once the system showed 100% accuracy for the classification of acute MI in mice, we tested the translational significance of this approach in 91 patients with an ischemic myocardial scar and 31 control subjects without evidence of myocardial scarring. To obtain a comparable augmentation dataset, we rotated scar images eight times and control images 72 times, generating a total of 6,684 scar images and 7,451 control images. In humans, the use of Progressive Growing GAN (PGGAN)-based augmentation showed 93% classification accuracy, far superior to conventional automated modules. The use of additional attention modules in our CNN further improved the classification accuracy by up to 5%. These data are of high translational significance and warrant larger multicenter studies in the future to validate the clinical implications.