Improved Quantification of Myocardium Scar in Late Gadolinium Enhancement Images: Deep Learning Based Image Fusion Approach

Bibliographic Details
Main Authors: Fahmy, Ahmed S., Rowin, Ethan J., Chan, Raymond H., Manning, Warren J., Maron, Martin S., Nezafat, Reza
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8359184/
https://www.ncbi.nlm.nih.gov/pubmed/33599043
http://dx.doi.org/10.1002/jmri.27555
Description
Summary:
BACKGROUND: Quantification of myocardium scarring in late gadolinium enhanced (LGE) cardiac magnetic resonance imaging can be challenging due to low scar‐to‐background contrast and low image quality. To resolve ambiguous LGE regions, experienced readers often use conventional cine sequences to accurately identify the myocardium borders.
PURPOSE: To develop a deep learning model for combining LGE and cine images to improve the robustness and accuracy of LGE scar quantification.
STUDY TYPE: Retrospective.
POPULATION: A total of 191 hypertrophic cardiomyopathy patients: 1) 162 patients from two sites randomly split into training (50%; 81 patients), validation (25%; 40 patients), and testing (25%; 41 patients); and 2) an external testing dataset (29 patients) from a third site.
FIELD STRENGTH/SEQUENCE: 1.5T, inversion‐recovery segmented gradient‐echo LGE and balanced steady‐state free‐precession cine sequences.
ASSESSMENT: Two convolutional neural networks (CNN) were trained for myocardium and scar segmentation, one with and one without LGE‐Cine fusion. For the CNN with fusion, the input was two aligned LGE and cine images at a matched cardiac phase and anatomical location. For the CNN without fusion, only LGE images were used as input. Manual segmentation of the datasets was used as the reference standard.
STATISTICAL TESTS: Manual and CNN‐based quantifications of LGE scar burden and myocardial volume were assessed using Pearson linear correlation coefficients (r) and Bland–Altman analysis.
RESULTS: Both CNN models showed strong agreement with manual quantification of LGE scar burden and myocardium volume. The CNN with LGE‐Cine fusion was more robust than the CNN without fusion, successfully segmenting significantly more slices (603 [95%] vs. 562 [89%] of 635 slices; P < 0.001). The CNN with LGE‐Cine fusion also showed better agreement with manual quantification of LGE scar burden (%Scar(LGE‐cine) = 0.82 × %Scar(manual), r = 0.84 vs. %Scar(LGE) = 0.47 × %Scar(manual), r = 0.81) and myocardium volume (Volume(LGE‐cine) = 1.03 × Volume(manual), r = 0.96 vs. Volume(LGE) = 0.91 × Volume(manual), r = 0.91).
DATA CONCLUSION: CNN‐based LGE‐Cine fusion can improve the robustness and accuracy of automated scar quantification.
LEVEL OF EVIDENCE: 3
TECHNICAL EFFICACY: 1
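
The fusion strategy described under ASSESSMENT (feeding an aligned LGE slice and its matched cine slice jointly to a segmentation CNN, versus LGE alone) can be illustrated with a minimal sketch. The network below is a hypothetical stand-in, not the authors' architecture; PyTorch, the layer sizes, the class-label convention, and the scar-burden formula are all assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Illustrative segmentation CNN; not the architecture used in the paper."""
    def __init__(self, in_channels: int, n_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)

    def forward(self, x):
        return self.head(self.encoder(x))  # per-pixel class logits

# Aligned LGE and cine slices at a matched cardiac phase and slice location
# (random tensors stand in for real images here).
lge = torch.randn(1, 1, 256, 256)
cine = torch.randn(1, 1, 256, 256)

# CNN with LGE-Cine fusion: stack the two images as input channels.
fused = torch.cat([lge, cine], dim=1)              # shape (1, 2, 256, 256)
logits_fusion = TinySegNet(in_channels=2)(fused)

# CNN without fusion: the LGE slice alone.
logits_lge_only = TinySegNet(in_channels=1)(lge)

# Scar burden as a percentage of the myocardium (assumed label convention:
# 0 = background, 1 = remote myocardium, 2 = scar).
labels = logits_fusion.argmax(dim=1)
myocardium = (labels == 1) | (labels == 2)
scar = labels == 2
scar_burden_pct = 100.0 * scar.sum().item() / max(myocardium.sum().item(), 1)
print(f"Scar burden: {scar_burden_pct:.1f}% of myocardium")
```

In this sketch the only difference between the two configurations is the number of input channels; the fused model sees the cine image as an extra channel, which is one simple way to give the segmentation network the clearer myocardial borders that readers otherwise take from the cine sequence.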