Deep-Learning Segmentation of Epicardial Adipose Tissue Using Four-Chamber Cardiac Magnetic Resonance Imaging


Bibliographic Details
Main Authors: Daudé, Pierre; Ancel, Patricia; Confort Gouny, Sylviane; Jacquier, Alexis; Kober, Frank; Dutour, Anne; Bernard, Monique; Gaborit, Bénédicte; Rapacchi, Stanislas
Format: Online Article Text
Language: English
Published: MDPI 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8774679/
https://www.ncbi.nlm.nih.gov/pubmed/35054297
http://dx.doi.org/10.3390/diagnostics12010126
Description
Summary: In magnetic resonance imaging (MRI), epicardial adipose tissue (EAT) overload often remains overlooked because manual contouring of the images is tedious. Automated four-chamber EAT area quantification was proposed, leveraging deep-learning segmentation with multi-frame fully convolutional networks (FCN). The investigation involved 100 subjects (healthy, obese, and diabetic patients) who underwent 3T cardiac cine MRI. An optimized U-Net and an FCN (denoted FCNB) were trained on three consecutive cine frames to segment the central frame, using a Dice loss. Networks were trained with 4-fold cross-validation (n = 80) and evaluated on an independent dataset (n = 20). Segmentation performance was compared against inter- and intra-observer bias using the Dice similarity coefficient (DSC) and the relative surface error (RSE). Both systolic and diastolic four-chamber EAT areas correlated with total EAT volume (r = 0.77 and 0.74, respectively). The networks' performance was equivalent to the inter-observer bias (EAT: DSC(Inter) = 0.76, DSC(U-Net) = 0.77, DSC(FCNB) = 0.76). U-Net outperformed FCNB (p < 0.0001) on all metrics. Ultimately, the proposed multi-frame U-Net provided automated EAT area quantification with 14.2% precision over the clinically relevant upper three quarters of the EAT area range, stratifying patients' risk of EAT overload with 70% accuracy. Exploiting the multi-frame U-Net on standard cine images thus provided automated EAT quantification over a wide range of EAT quantities. The method is made available to the community as an FSLeyes plugin.
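
The abstract outlines the training scheme: three consecutive cine frames are stacked and fed to a 2D network that segments the central frame, optimized with a Dice loss. Below is a minimal PyTorch sketch of that scheme; the toy stand-in network, tensor shapes, and binary EAT mask are illustrative assumptions, not the authors' actual architecture.

```python
# Sketch of the multi-frame input scheme described in the abstract:
# three consecutive cine frames (t-1, t, t+1) become the input channels,
# and the network predicts a segmentation mask for the central frame,
# trained with a soft Dice loss.
import torch
import torch.nn as nn

class SoftDiceLoss(nn.Module):
    """Soft Dice loss: 1 - 2|P.G| / (|P| + |G|), computed on probabilities."""
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)
        inter = (probs * target).sum(dim=(2, 3))
        denom = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        dice = (2 * inter + self.eps) / (denom + self.eps)
        return 1.0 - dice.mean()

# Any 2D U-Net accepting in_channels=3 can consume the frame triplet;
# this tiny Sequential is only a placeholder for the paper's U-Net/FCNB.
frames = torch.randn(8, 3, 256, 256)                     # batch of 3-frame stacks
target = torch.randint(0, 2, (8, 1, 256, 256)).float()   # central-frame EAT mask

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
loss = SoftDiceLoss()(net(frames), target)
loss.backward()
```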
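The two evaluation metrics named in the abstract, DSC and RSE, can be computed from binary masks as sketched below. The abstract does not define RSE precisely, so the absolute-difference convention used here is an assumption.

```python
# Segmentation evaluation metrics on binary masks (assumed definitions).
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def relative_surface_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Relative surface (area) error in percent: |A_pred - A_gt| / A_gt * 100.
    Assumed absolute form; a signed variant would drop the abs()."""
    a_pred = pred.astype(bool).sum()
    a_gt = gt.astype(bool).sum()
    return abs(float(a_pred) - float(a_gt)) / float(a_gt) * 100.0
```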