
Deep convolutional neural network for reduction of contrast-enhanced region on CT images

Bibliographic Details
Main Authors: Sumida, Iori, Magome, Taiki, Kitamori, Hideki, Das, Indra J, Yamaguchi, Hajime, Kizaki, Hisao, Aboshi, Keiko, Yamashita, Kyohei, Yamada, Yuji, Seo, Yuji, Isohashi, Fumiaki, Ogawa, Kazuhiko
Format: Online Article Text
Language: English
Published: Oxford University Press 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6805976/
https://www.ncbi.nlm.nih.gov/pubmed/31125068
http://dx.doi.org/10.1093/jrr/rrz030
_version_ 1783461520875716608
author Sumida, Iori
Magome, Taiki
Kitamori, Hideki
Das, Indra J
Yamaguchi, Hajime
Kizaki, Hisao
Aboshi, Keiko
Yamashita, Kyohei
Yamada, Yuji
Seo, Yuji
Isohashi, Fumiaki
Ogawa, Kazuhiko
author_facet Sumida, Iori
Magome, Taiki
Kitamori, Hideki
Das, Indra J
Yamaguchi, Hajime
Kizaki, Hisao
Aboshi, Keiko
Yamashita, Kyohei
Yamada, Yuji
Seo, Yuji
Isohashi, Fumiaki
Ogawa, Kazuhiko
author_sort Sumida, Iori
collection PubMed
description This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced images using a deep convolutional neural network (CNN). Twenty-nine patients were selected. CT images were acquired both without and with a contrast medium. The transverse images were divided into 64 × 64 pixel patches, resulting in 14 723 patches in total for the paired non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, a U-net model comprising five 2D convolution layers interleaved with pooling and unpooling layers was used. Training was performed on 24 patients, and the remaining 5 patients were used to test the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced image of the test data, and the mean pixel value of each ROI was calculated. The mean pixel values of the ROIs at the same locations on the reference non-contrast image and on the predicted non-contrast image were then calculated and compared. In the quantitative analysis, the difference in mean pixel value between the reference contrast-enhanced image and the predicted non-contrast image was significant (P < 0.0001) for both models. Significant differences in pixel values (P < 0.0001) were found with the U-net model, whereas there was no significant difference with the proposed CNN model when comparing the reference non-contrast images with the predicted non-contrast images. Using the proposed CNN model, the contrast-enhanced region was satisfactorily reduced.
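The record contains no code, but the description above specifies the architecture fairly concretely (five 2D convolution layers with one shortcut path, operating on 64 × 64 patches). As a minimal illustrative sketch of that kind of model, the following PyTorch module is one plausible reading; the channel counts, kernel sizes, activation functions, and the exact placement of the shortcut are assumptions, not details taken from the paper.

# Hypothetical sketch of the CNN described above: five 2D convolution layers
# with a single shortcut path, mapping a contrast-enhanced 64 x 64 patch to a
# predicted non-contrast patch. Channel counts, kernel sizes and the shortcut
# placement are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class ContrastReductionCNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(64, 1, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.relu(self.conv1(x))
        h = self.relu(self.conv2(h))
        h = self.relu(self.conv3(h))
        h = self.relu(self.conv4(h))
        # One shortcut path: add the input patch back to the final convolution's
        # output, so the network effectively learns the contrast component to remove.
        return self.conv5(h) + x


if __name__ == "__main__":
    model = ContrastReductionCNN()
    patch = torch.randn(1, 1, 64, 64)  # one single-channel 64 x 64 CT patch
    print(model(patch).shape)          # torch.Size([1, 1, 64, 64])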
format Online
Article
Text
id pubmed-6805976
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-6805976 2019-10-28 Deep convolutional neural network for reduction of contrast-enhanced region on CT images Sumida, Iori Magome, Taiki Kitamori, Hideki Das, Indra J Yamaguchi, Hajime Kizaki, Hisao Aboshi, Keiko Yamashita, Kyohei Yamada, Yuji Seo, Yuji Isohashi, Fumiaki Ogawa, Kazuhiko J Radiat Res Regular Papers This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced images using a deep convolutional neural network (CNN). Twenty-nine patients were selected. CT images were acquired both without and with a contrast medium. The transverse images were divided into 64 × 64 pixel patches, resulting in 14 723 patches in total for the paired non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, a U-net model comprising five 2D convolution layers interleaved with pooling and unpooling layers was used. Training was performed on 24 patients, and the remaining 5 patients were used to test the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced image of the test data, and the mean pixel value of each ROI was calculated. The mean pixel values of the ROIs at the same locations on the reference non-contrast image and on the predicted non-contrast image were then calculated and compared. In the quantitative analysis, the difference in mean pixel value between the reference contrast-enhanced image and the predicted non-contrast image was significant (P < 0.0001) for both models. Significant differences in pixel values (P < 0.0001) were found with the U-net model, whereas there was no significant difference with the proposed CNN model when comparing the reference non-contrast images with the predicted non-contrast images. Using the proposed CNN model, the contrast-enhanced region was satisfactorily reduced. Oxford University Press 2019-10 2019-05-24 /pmc/articles/PMC6805976/ /pubmed/31125068 http://dx.doi.org/10.1093/jrr/rrz030 Text en © The Author(s) 2019. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology. http://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
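As a similarly hedged sketch of the ROI-based evaluation described above (mean pixel values over 50 ROIs compared between reference and predicted images), the snippet below assumes rectangular ROIs and a paired t-test; the record names neither the statistical test nor the ROI shape, so both are illustrative assumptions.

# Hypothetical ROI-based comparison: compute the mean pixel value inside each
# ROI on two images and test the paired differences. The paired t-test and the
# rectangular ROIs are assumptions, not details taken from the paper.
import numpy as np
from scipy import stats


def roi_means(image, rois):
    """Mean pixel value inside each rectangular ROI, given as (row-slice, col-slice)."""
    return np.array([image[r, c].mean() for r, c in rois])


def compare(reference, predicted, rois):
    """Paired comparison of ROI means between a reference and a predicted image."""
    t, p = stats.ttest_rel(roi_means(reference, rois), roi_means(predicted, rois))
    return float(t), float(p)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref_non_contrast = rng.normal(40.0, 5.0, size=(512, 512))  # synthetic stand-in image
    pred_non_contrast = ref_non_contrast + rng.normal(0.0, 1.0, size=(512, 512))
    # 50 ROIs of 8 x 8 pixels at random positions (illustrative only).
    rois = [(slice(int(r), int(r) + 8), slice(int(c), int(c) + 8))
            for r, c in rng.integers(0, 504, size=(50, 2))]
    print(compare(ref_non_contrast, pred_non_contrast, rois))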
spellingShingle Regular Papers
Sumida, Iori
Magome, Taiki
Kitamori, Hideki
Das, Indra J
Yamaguchi, Hajime
Kizaki, Hisao
Aboshi, Keiko
Yamashita, Kyohei
Yamada, Yuji
Seo, Yuji
Isohashi, Fumiaki
Ogawa, Kazuhiko
Deep convolutional neural network for reduction of contrast-enhanced region on CT images
title Deep convolutional neural network for reduction of contrast-enhanced region on CT images
title_full Deep convolutional neural network for reduction of contrast-enhanced region on CT images
title_fullStr Deep convolutional neural network for reduction of contrast-enhanced region on CT images
title_full_unstemmed Deep convolutional neural network for reduction of contrast-enhanced region on CT images
title_short Deep convolutional neural network for reduction of contrast-enhanced region on CT images
title_sort deep convolutional neural network for reduction of contrast-enhanced region on ct images
topic Regular Papers
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6805976/
https://www.ncbi.nlm.nih.gov/pubmed/31125068
http://dx.doi.org/10.1093/jrr/rrz030
work_keys_str_mv AT sumidaiori deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT magometaiki deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT kitamorihideki deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT dasindraj deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT yamaguchihajime deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT kizakihisao deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT aboshikeiko deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT yamashitakyohei deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT yamadayuji deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT seoyuji deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT isohashifumiaki deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages
AT ogawakazuhiko deepconvolutionalneuralnetworkforreductionofcontrastenhancedregiononctimages