A new generative adversarial network for medical images super resolution
For medical image analysis, there is always an immense need for rich details in an image. Typically, the diagnosis will be served best if the fine details in the image are retained and the image is available in high resolution. In medical imaging, acquiring high-resolution images is challenging and...
Main Authors: | Ahmad, Waqar; Ali, Hazrat; Shah, Zubair; Azmat, Shoaib |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9184641/ https://www.ncbi.nlm.nih.gov/pubmed/35680968 http://dx.doi.org/10.1038/s41598-022-13658-4 |
_version_ | 1784724568497192960 |
---|---|
author | Ahmad, Waqar; Ali, Hazrat; Shah, Zubair; Azmat, Shoaib |
author_facet | Ahmad, Waqar; Ali, Hazrat; Shah, Zubair; Azmat, Shoaib |
author_sort | Ahmad, Waqar |
collection | PubMed |
description | For medical image analysis, there is always an immense need for rich detail in an image. Typically, a diagnosis is best served if the fine details in the image are retained and the image is available in high resolution. In medical imaging, acquiring high-resolution images is challenging and costly, as it requires sophisticated and expensive instruments and trained human resources, and often causes operational delays. Deep learning-based super-resolution techniques can help extract rich details from a low-resolution image acquired using existing devices. In this paper, we propose a new Generative Adversarial Network (GAN)-based architecture for medical images, which maps low-resolution medical images to high-resolution images. The proposed architecture is divided into three steps. In the first step, we use a multi-path architecture to extract shallow features at multiple scales instead of a single scale. In the second step, we use a ResNet34 architecture to extract deep features and upscale the feature map by a factor of two. In the third step, we extract features of the upscaled version of the image using a residual connection-based mini-CNN and again upscale the feature map by a factor of two. The progressive upscaling overcomes the limitation of previous methods in generating true colors. Finally, we use a reconstruction convolutional layer to map the upscaled features back to a high-resolution image. Our addition of an extra loss term helps in overcoming large errors, thus generating more realistic and smooth images. We evaluate the proposed architecture on four different medical image modalities: (1) the DRIVE and STARE datasets of retinal fundoscopy images, (2) the BraTS dataset of brain MRI, (3) the ISIC skin cancer dataset of dermoscopy images, and (4) the CAMUS dataset of cardiac ultrasound images. The proposed architecture achieves superior accuracy compared to other state-of-the-art super-resolution architectures. |
format | Online Article Text |
id | pubmed-9184641 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9184641 2022-06-11 A new generative adversarial network for medical images super resolution Ahmad, Waqar; Ali, Hazrat; Shah, Zubair; Azmat, Shoaib Sci Rep Article (abstract as given in the description field above) Nature Publishing Group UK 2022-06-09 /pmc/articles/PMC9184641/ /pubmed/35680968 http://dx.doi.org/10.1038/s41598-022-13658-4 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Ahmad, Waqar; Ali, Hazrat; Shah, Zubair; Azmat, Shoaib; A new generative adversarial network for medical images super resolution |
title | A new generative adversarial network for medical images super resolution |
title_full | A new generative adversarial network for medical images super resolution |
title_fullStr | A new generative adversarial network for medical images super resolution |
title_full_unstemmed | A new generative adversarial network for medical images super resolution |
title_short | A new generative adversarial network for medical images super resolution |
title_sort | new generative adversarial network for medical images super resolution |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9184641/ https://www.ncbi.nlm.nih.gov/pubmed/35680968 http://dx.doi.org/10.1038/s41598-022-13658-4 |
work_keys_str_mv | AT ahmadwaqar anewgenerativeadversarialnetworkformedicalimagessuperresolution AT alihazrat anewgenerativeadversarialnetworkformedicalimagessuperresolution AT shahzubair anewgenerativeadversarialnetworkformedicalimagessuperresolution AT azmatshoaib anewgenerativeadversarialnetworkformedicalimagessuperresolution AT ahmadwaqar newgenerativeadversarialnetworkformedicalimagessuperresolution AT alihazrat newgenerativeadversarialnetworkformedicalimagessuperresolution AT shahzubair newgenerativeadversarialnetworkformedicalimagessuperresolution AT azmatshoaib newgenerativeadversarialnetworkformedicalimagessuperresolution |
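The generator pipeline described in the abstract (multi-scale shallow features, ResNet34 deep features with a 2x upscale, then a residual mini-CNN with a second 2x upscale, followed by a reconstruction layer) amounts to progressive 4x super-resolution. The following is a minimal shape-level sketch of that pipeline, not the authors' code: the learned stages are replaced by identity stand-ins, and nearest-neighbor replication stands in for the learned upsampling, so only the spatial bookkeeping is shown.

```python
import numpy as np

def upscale2x(x):
    # Nearest-neighbor stand-in for a learned 2x upsampling layer:
    # each pixel is replicated into a 2x2 block.
    return np.kron(x, np.ones((2, 2)))

# Identity stand-ins for the three learned stages described in the
# abstract; in the real network each would be a convolutional block.
def shallow_multiscale_features(x):  # step 1: multi-path shallow features
    return x

def deep_resnet34_features(x):       # step 2: ResNet34 deep features
    return x

def mini_cnn_features(x):            # step 3: residual mini-CNN
    return x

lr = np.zeros((64, 64))                    # low-resolution input image
h = shallow_multiscale_features(lr)        # 64x64 feature map
h = upscale2x(deep_resnet34_features(h))   # 64x64 -> 128x128
h = upscale2x(mini_cnn_features(h))        # 128x128 -> 256x256
hr = h                                     # reconstruction layer output
print(hr.shape)                            # (256, 256)
```

Running two 2x stages rather than one 4x stage is the "progressive upscaling" the abstract credits with better color fidelity; each identity stand-in above would be a trained block in the actual architecture.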