Classification of head and neck cancer from PET images using convolutional neural networks
Main Authors: | Hellström, Henri; Liedes, Joonas; Rainio, Oona; Malaspina, Simona; Kemppainen, Jukka; Klén, Riku |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10310830/ https://www.ncbi.nlm.nih.gov/pubmed/37386289 http://dx.doi.org/10.1038/s41598-023-37603-1 |
_version_ | 1785066617237929984 |
author | Hellström, Henri Liedes, Joonas Rainio, Oona Malaspina, Simona Kemppainen, Jukka Klén, Riku |
author_facet | Hellström, Henri Liedes, Joonas Rainio, Oona Malaspina, Simona Kemppainen, Jukka Klén, Riku |
author_sort | Hellström, Henri |
collection | PubMed |
description | The aim of this study was to develop a convolutional neural network (CNN) for classifying positron emission tomography (PET) images of patients with and without head and neck squamous cell carcinoma (HNSCC) and other types of head and neck cancer. A PET/magnetic resonance imaging scan with (18)F-fluorodeoxyglucose ((18)F-FDG) was performed for 200 head and neck cancer patients, 182 of whom were diagnosed with HNSCC, and the location of cancer tumors was marked on the images with a binary mask by a medical doctor. The models were trained and tested with five-fold cross-validation on the primary data set of 1990 2D images, obtained by dividing the original 3D images of 178 HNSCC patients into transaxial slices, and on an additional test set of 238 images from patients with head and neck cancer other than HNSCC. A shallow and a deep CNN were built using the U-Net architecture to classify the data into two groups based on whether an image contains cancer or not. The impact of data augmentation on the performance of the two CNNs was also considered. According to our results, the best model for this task in terms of area under the receiver operating characteristic curve (AUC) is a deep augmented model with a median AUC of 85.1%. The four models had the highest sensitivity for HNSCC tumors on the root of the tongue (median sensitivities of 83.3–97.7%), in the fossa piriformis (80.2–93.3%), and in the oral cavity (70.4–81.7%). Although the models were trained with only HNSCC data, they also had very good sensitivity for detecting follicular and papillary carcinoma of the thyroid gland and mucoepidermoid carcinoma of the parotid gland (91.7–100%). |
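The data preparation described in the abstract — dividing each patient's 3D PET volume into transaxial 2D slices and evaluating with five-fold cross-validation — can be sketched as follows. This is a minimal illustration with synthetic arrays, not the authors' code; all names and shapes are assumptions. The split is done at the patient level so that slices from one patient never land in both training and test data, which is the standard way to avoid leakage in slice-based evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 178  # HNSCC patients in the primary data set

# Fake 3D PET volumes: one (depth x height x width) array per patient.
volumes = {pid: rng.random((int(rng.integers(8, 14)), 16, 16))
           for pid in range(n_patients)}

# Divide each volume into transaxial (axial) 2D slices, keeping the patient id.
slices = [(pid, vol[z]) for pid, vol in volumes.items()
          for z in range(vol.shape[0])]

# Patient-level five-fold split: shuffle patients, then partition into 5 folds.
patient_ids = rng.permutation(n_patients)
folds = np.array_split(patient_ids, 5)

for k, test_patients in enumerate(folds):
    test_set = set(test_patients.tolist())
    train = [s for s in slices if s[0] not in test_set]
    test = [s for s in slices if s[0] in test_set]
    print(f"fold {k}: {len(train)} training slices, {len(test)} test slices")
```

In each of the five iterations a model would be trained on the training slices and scored on the held-out patients' slices, yielding one AUC per fold from which a median is reported.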
format | Online Article Text |
id | pubmed-10310830 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-103108302023-07-01 Classification of head and neck cancer from PET images using convolutional neural networks Hellström, Henri Liedes, Joonas Rainio, Oona Malaspina, Simona Kemppainen, Jukka Klén, Riku Sci Rep Article The aim of this study was to develop a convolutional neural network (CNN) for classifying positron emission tomography (PET) images of patients with and without head and neck squamous cell carcinoma (HNSCC) and other types of head and neck cancer. A PET/magnetic resonance imaging scan with (18)F-fluorodeoxyglucose ((18)F-FDG) was performed for 200 head and neck cancer patients, 182 of whom were diagnosed with HNSCC, and the location of cancer tumors was marked on the images with a binary mask by a medical doctor. The models were trained and tested with five-fold cross-validation on the primary data set of 1990 2D images, obtained by dividing the original 3D images of 178 HNSCC patients into transaxial slices, and on an additional test set of 238 images from patients with head and neck cancer other than HNSCC. A shallow and a deep CNN were built using the U-Net architecture to classify the data into two groups based on whether an image contains cancer or not. The impact of data augmentation on the performance of the two CNNs was also considered. According to our results, the best model for this task in terms of area under the receiver operating characteristic curve (AUC) is a deep augmented model with a median AUC of 85.1%. The four models had the highest sensitivity for HNSCC tumors on the root of the tongue (median sensitivities of 83.3–97.7%), in the fossa piriformis (80.2–93.3%), and in the oral cavity (70.4–81.7%). Although the models were trained with only HNSCC data, they also had very good sensitivity for detecting follicular and papillary carcinoma of the thyroid gland and mucoepidermoid carcinoma of the parotid gland (91.7–100%).
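The evaluation metric named in this record, the area under the receiver operating characteristic curve (AUC), can be computed for a binary "cancer vs. no cancer" classifier from per-slice scores alone. Below is an illustrative implementation using the rank (Mann-Whitney U) formulation; the labels and scores are synthetic, and this is not the authors' code.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney U) formulation, handling tied scores."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    # Replace ranks of tied scores with their average rank.
    for s in np.unique(scores):
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(labels, scores))  # 8 of 9 positive/negative pairs correctly ranked
```

An AUC of 1.0 means every cancer slice scores above every non-cancer slice, while 0.5 is chance level; the study's reported median AUC of 85.1% sits between these extremes.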
Nature Publishing Group UK 2023-06-29 /pmc/articles/PMC10310830/ /pubmed/37386289 http://dx.doi.org/10.1038/s41598-023-37603-1 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Hellström, Henri Liedes, Joonas Rainio, Oona Malaspina, Simona Kemppainen, Jukka Klén, Riku Classification of head and neck cancer from PET images using convolutional neural networks |
title | Classification of head and neck cancer from PET images using convolutional neural networks |
title_full | Classification of head and neck cancer from PET images using convolutional neural networks |
title_fullStr | Classification of head and neck cancer from PET images using convolutional neural networks |
title_full_unstemmed | Classification of head and neck cancer from PET images using convolutional neural networks |
title_short | Classification of head and neck cancer from PET images using convolutional neural networks |
title_sort | classification of head and neck cancer from pet images using convolutional neural networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10310830/ https://www.ncbi.nlm.nih.gov/pubmed/37386289 http://dx.doi.org/10.1038/s41598-023-37603-1 |
work_keys_str_mv | AT hellstromhenri classificationofheadandneckcancerfrompetimagesusingconvolutionalneuralnetworks AT liedesjoonas classificationofheadandneckcancerfrompetimagesusingconvolutionalneuralnetworks AT rainiooona classificationofheadandneckcancerfrompetimagesusingconvolutionalneuralnetworks AT malaspinasimona classificationofheadandneckcancerfrompetimagesusingconvolutionalneuralnetworks AT kemppainenjukka classificationofheadandneckcancerfrompetimagesusingconvolutionalneuralnetworks AT klenriku classificationofheadandneckcancerfrompetimagesusingconvolutionalneuralnetworks |