Enhancing magnetic resonance imaging-driven Alzheimer’s disease classification performance using generative adversarial learning

Bibliographic Details
Main Authors: Zhou, Xiao, Qiu, Shangran, Joshi, Prajakta S., Xue, Chonghua, Killiany, Ronald J., Mian, Asim Z., Chin, Sang P., Au, Rhoda, Kolachalama, Vijaya B.
Format: Online Article Text
Language: English
Published: BioMed Central 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7958452/
https://www.ncbi.nlm.nih.gov/pubmed/33715635
http://dx.doi.org/10.1186/s13195-021-00797-5
Description
Summary:

BACKGROUND: Generative adversarial networks (GAN) can produce images of improved quality, but their ability to augment image-based classification has not been fully explored. We evaluated whether a modified GAN can learn from magnetic resonance imaging (MRI) scans of multiple magnetic field strengths to enhance Alzheimer’s disease (AD) classification performance.

METHODS: T1-weighted brain MRI scans from 151 participants of the Alzheimer’s Disease Neuroimaging Initiative (ADNI) who underwent both 1.5-Tesla (1.5-T) and 3-Tesla (3-T) imaging at the same time were selected to construct a GAN model. This model was trained along with a three-dimensional fully convolutional network (FCN) using the generated images (3T*) as inputs to predict AD status. Quality of the generated images was evaluated using the signal-to-noise ratio (SNR), the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), and the Natural Image Quality Evaluator (NIQE). Cases from the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL, n = 107) and the National Alzheimer’s Coordinating Center (NACC, n = 565) were used for model validation.

RESULTS: The 3T*-based FCN classifier performed better than the FCN model trained using the 1.5-T scans. Specifically, the mean area under the curve increased from 0.907 to 0.932, from 0.934 to 0.940, and from 0.870 to 0.907 on the ADNI test, AIBL, and NACC datasets, respectively. Additionally, we found that the mean quality of the generated (3T*) images was consistently higher than that of the 1.5-T images, as measured using SNR, BRISQUE, and NIQE on the validation datasets.

CONCLUSION: This study demonstrates a proof of principle that GAN frameworks can be constructed to augment AD classification performance and improve image quality.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13195-021-00797-5.
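
As a rough illustration of the kind of pipeline the abstract describes, the sketch below pairs an image-to-image generator (1.5-T volume in, synthetic 3T* volume out) with a discriminator trained against the paired real 3-T scans, and feeds the generated volumes into a small 3-D fully convolutional classifier for AD status. This is a minimal PyTorch sketch under stated assumptions: the layer widths, loss weights, and optimizer handling are illustrative choices, not the configuration reported by the authors.

# Hypothetical sketch only: architecture details and loss weights are assumptions,
# not the settings used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator3D(nn.Module):
    """Refines a 1.5-T volume into a synthetic 3T* volume of the same shape."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual refinement of the input scan

class Discriminator3D(nn.Module):
    """Scores how much a volume resembles a real 3-T acquisition."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(2 * ch, 1),
        )

    def forward(self, x):
        return self.body(x)

class FCNClassifier3D(nn.Module):
    """3-D fully convolutional classifier predicting AD status (2 classes)."""
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(2 * ch, 2, 1),          # per-location class scores
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.body(x)

def train_step(gen, disc, fcn, opt_g, opt_d, opt_c, scan_15t, scan_3t, label):
    """One update on a paired (1.5-T, 3-T) volume with an AD / non-AD label."""
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    fake = gen(scan_15t)
    n = scan_3t.size(0)

    # Discriminator: real 3-T scans vs. generated 3T* volumes.
    d_loss = bce(disc(scan_3t), torch.ones(n, 1)) + \
             bce(disc(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the paired 3-T scan.
    g_loss = bce(disc(fake), torch.ones(n, 1)) + 10.0 * F.l1_loss(fake, scan_3t)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Classifier: predict AD status from the generated 3T* volume.
    c_loss = ce(fcn(fake.detach()), label)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    return d_loss.item(), g_loss.item(), c_loss.item()

In the study's setting, scan_15t and scan_3t would be paired T1-weighted ADNI volumes acquired at the same visit, and the classifier trained on the generated 3T* volumes would then be evaluated on the AIBL and NACC cases.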