
Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images

Bibliographic Details
Main Authors: Men, Kuo; Chen, Xinyuan; Zhang, Ye; Zhang, Tao; Dai, Jianrong; Yi, Junlin; Li, Yexiong
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2017
Subjects: Oncology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5770734/
https://www.ncbi.nlm.nih.gov/pubmed/29376025
http://dx.doi.org/10.3389/fonc.2017.00315
author Men, Kuo
Chen, Xinyuan
Zhang, Ye
Zhang, Tao
Dai, Jianrong
Yi, Junlin
Li, Yexiong
collection PubMed
description BACKGROUND: Radiotherapy is one of the main treatment methods for nasopharyngeal carcinoma (NPC). It requires accurate delineation of the nasopharynx gross tumor volume (GTVnx), the metastatic lymph node gross tumor volume (GTVnd), the clinical target volume (CTV), and organs at risk in the planning computed tomography images. However, this task is time-consuming and operator dependent. In the present study, we developed an end-to-end deep deconvolutional neural network (DDNN) for segmentation of these targets. METHODS: The proposed DDNN is an end-to-end architecture that enables fast training and testing. It consists of two components: an encoder network and a decoder network. The encoder network was used to extract the visual features of a medical image, and the decoder network was used to recover the original resolution by deploying deconvolution. A total of 230 patients diagnosed with stage I or stage II NPC were included in this study. Data from 184 patients were chosen randomly as the training set to adjust the parameters of the DDNN, and the remaining 46 patients served as the test set to assess the performance of the model. The Dice similarity coefficient (DSC) was used to quantify the segmentation results for the GTVnx, GTVnd, and CTV. In addition, the performance of the DDNN was compared with that of the VGG-16 model. RESULTS: The proposed DDNN outperformed VGG-16 for all segmentation targets. The mean DSC values of the DDNN were 80.9% for the GTVnx, 62.3% for the GTVnd, and 82.6% for the CTV, whereas VGG-16 obtained 72.3%, 33.7%, and 73.7%, respectively. CONCLUSION: The DDNN can segment the GTVnx and CTV accurately. The accuracy for GTVnd segmentation was relatively low because of the considerable differences in its shape, volume, and location among patients. The accuracy is expected to increase with more training data and the incorporation of MR images. In conclusion, the DDNN has the potential to improve the consistency of contouring and streamline radiotherapy workflows, but careful human review and a considerable amount of editing will still be required.
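The description above outlines an encoder-decoder design in which convolutions extract image features and deconvolutions (transposed convolutions) restore the original resolution, with segmentation quality scored by the Dice similarity coefficient. The following is a minimal PyTorch-style sketch of those two ideas, not the authors' implementation: the TinyDDNN class, its layer counts and channel widths, the four-class output (background, GTVnx, GTVnd, CTV), and the dice_coefficient helper are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class TinyDDNN(nn.Module):
    """Minimal encoder-decoder sketch: strided convolutions downsample and extract
    features; transposed convolutions ("deconvolutions") restore input resolution.
    Layer counts and channel widths are illustrative, not the paper's."""

    def __init__(self, num_classes: int = 4):  # assumed: background + GTVnx + GTVnd + CTV
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns per-pixel class scores at the same spatial size as the input.
        return self.decoder(self.encoder(x))


def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient for binary masks: DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.bool()
    true = true_mask.bool()
    intersection = (pred & true).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + true.sum().item() + eps)


if __name__ == "__main__":
    model = TinyDDNN()
    ct_slice = torch.randn(1, 1, 256, 256)       # one single-channel CT slice (toy size)
    logits = model(ct_slice)                     # shape: (1, num_classes, 256, 256)
    predicted = logits.argmax(dim=1)             # per-pixel label map
    print(dice_coefficient(predicted == 3, predicted == 3))  # identical masks -> 1.0
```

On identical masks the helper returns 1.0 and on disjoint masks it approaches 0, matching the DSC values reported above when expressed as percentages.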
format Online
Article
Text
id pubmed-5770734
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-5770734 2018-01-26 Front Oncol Oncology Frontiers Media S.A. 2017-12-20 /pmc/articles/PMC5770734/ /pubmed/29376025 http://dx.doi.org/10.3389/fonc.2017.00315 Text en Copyright © 2017 Men, Chen, Zhang, Zhang, Dai, Yi and Li. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Deep Deconvolutional Neural Network for Target Segmentation of Nasopharyngeal Cancer in Planning Computed Tomography Images
topic Oncology