Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion

Bibliographic Details

Main Authors: Amini Amirkolaee, Hamed; Amini Amirkolaee, Hamid
Format: Online Article Text
Language: English
Published: Editorial Department of Journal of Biomedical Research, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9724158/
https://www.ncbi.nlm.nih.gov/pubmed/35821004
http://dx.doi.org/10.7555/JBR.36.20220037
Description
Summary: In this paper, we propose a deep learning-based framework for medical image translation using paired and unpaired training data. Initially, a deep neural network with an encoder-decoder structure is proposed for image-to-image translation using paired training data. A multi-scale context aggregation approach is then used to extract various features from different levels of encoding, which are used during the corresponding network decoding stage. We further propose an edge-guided generative adversarial network for image-to-image translation based on unpaired training data. An edge constraint loss function is used to improve network performance at tissue boundaries. To analyze framework performance, we conducted five different medical image translation tasks. The assessment demonstrates that the proposed deep learning framework yields significant improvements over state-of-the-art methods.
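The abstract does not give the exact form of the edge constraint loss, but a common choice for such a term is to penalize the difference between Sobel edge maps of the translated and target images. The sketch below is a minimal, hypothetical NumPy illustration of that idea; the function names and the L1 penalty are assumptions, not the paper's definition.

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude with 3x3 Sobel filters (valid convolution).
    A plain-loop illustration; a real pipeline would use a vectorized or GPU conv."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient Sobel kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_constraint_loss(translated, target):
    """Mean absolute difference between the two edge maps (assumed L1 form)."""
    return float(np.mean(np.abs(sobel_edges(translated) - sobel_edges(target))))
```

Under this formulation, the loss is zero when the two images share identical edge structure and grows as boundaries diverge, which is why such a term encourages sharper tissue contours in the translated image.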