Head and neck tumor segmentation convolutional neural network robust to missing PET/CT modalities using channel dropout

Bibliographic Details
Main Authors: Zhao, Lin-mei, Zhang, Helen, Kim, Daniel D, Ghimire, Kanchan, Hu, Rong, Kargilis, Daniel C, Tang, Lei, Meng, Shujuan, Chen, Quan, Liao, Wei-hua, Bai, Harrison, Jiao, Zhicheng, Feng, Xue
Format: Online Article Text
Language: English
Published: IOP Publishing 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126383/
https://www.ncbi.nlm.nih.gov/pubmed/37019119
http://dx.doi.org/10.1088/1361-6560/accac9
Description
Summary: Objective. Radiation therapy for head and neck (H&N) cancer relies on accurate segmentation of the primary tumor. A robust, accurate, and automated gross tumor volume segmentation method is warranted for H&N cancer therapeutic management. The purpose of this study is to develop a novel deep learning segmentation model for H&N cancer based on independent and combined CT and FDG-PET modalities. Approach. In this study, we developed a robust deep learning-based model leveraging information from both CT and PET. We implemented a 3D U-Net architecture with 5 levels of encoding and decoding, computing model loss through deep supervision. We used a channel dropout technique to emulate different combinations of input modalities. This technique prevents potential performance issues when only one modality is available, increasing model robustness. We implemented ensemble modeling by combining two types of convolutions with differing receptive fields, conventional and dilated, to improve capture of both fine details and global information. Main Results. Our proposed methods yielded promising results, with a Dice similarity coefficient (DSC) of 0.802 when deployed on combined CT and PET, a DSC of 0.610 when deployed on CT alone, and a DSC of 0.750 when deployed on PET alone. Significance. Application of a channel dropout method allowed a single model to achieve high performance when deployed on either single-modality images (CT or PET) or combined-modality images (CT and PET). The presented segmentation techniques are clinically relevant to applications where images from a certain modality might not always be available.
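
No code accompanies this record; the sketch below is only a minimal illustration of the kind of channel dropout step described in the summary, written in PyTorch and assuming a two-channel CT/PET input tensor of shape (N, 2, D, H, W). The function name, drop probabilities, and channel ordering are illustrative assumptions, not the authors' implementation.

import torch

def channel_dropout(x: torch.Tensor, p_ct: float = 0.25, p_pet: float = 0.25) -> torch.Tensor:
    """Randomly zero the CT or PET channel of a batch of two-channel volumes.

    x is assumed to be shaped (N, 2, D, H, W) with channel 0 = CT and
    channel 1 = PET. At most one channel is dropped per sample, so the
    network always sees at least one modality.
    """
    x = x.clone()
    for i in range(x.shape[0]):
        r = torch.rand(1).item()
        if r < p_ct:
            x[i, 0] = 0.0   # emulate a missing CT scan
        elif r < p_ct + p_pet:
            x[i, 1] = 0.0   # emulate a missing PET scan
    return x

# Applied only during training, e.g. in the data loader or just before the
# forward pass of the 3D U-Net:
#   inputs = channel_dropout(inputs)
#   outputs = model(inputs)

By zeroing one modality at random during training, a single model is exposed to CT-only, PET-only, and combined inputs, which is how the summary explains its robustness when a modality is missing at deployment time.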