Evaluation of a multiview architecture for automatic vertebral labeling of palliative radiotherapy simulation CT images
Main authors: | Netherton, Tucker J.; Rhee, Dong Joo; Cardenas, Carlos E.; Chung, Caroline; Klopp, Ann H.; Peterson, Christine B.; Howell, Rebecca M.; Balter, Peter A.; Court, Laurence E. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | John Wiley and Sons Inc., 2020 |
Subjects: | QUANTITATIVE IMAGING AND IMAGE PROCESSING |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7756475/ https://www.ncbi.nlm.nih.gov/pubmed/33459402 http://dx.doi.org/10.1002/mp.14415 |
_version_ | 1783626550525034496 |
---|---|
author | Netherton, Tucker J. Rhee, Dong Joo Cardenas, Carlos E. Chung, Caroline Klopp, Ann H. Peterson, Christine B. Howell, Rebecca M. Balter, Peter A. Court, Laurence E. |
author_sort | Netherton, Tucker J. |
collection | PubMed |
description | PURPOSE: The purpose of this work was to evaluate the performance of X‐Net, a multiview deep learning architecture, to automatically label vertebral levels (S2‐C1) in palliative radiotherapy simulation CT scans.
METHODS: For each patient CT scan, our automated approach 1) segmented the spinal canal using a convolutional neural network (CNN), 2) formed sagittal and coronal intensity projection pairs, 3) labeled vertebral levels with X‐Net, and 4) detected irregular intervertebral spacing using an analytic method. The spinal canal CNN was trained via fivefold cross validation using 1,966 simulation CT scans and evaluated on 330 CT scans. After vertebral levels (S2‐C1) were labeled in 897 palliative radiotherapy simulation CT scans, a volume of interest surrounding the spinal canal in each patient's CT scan was converted into sagittal and coronal intensity projection image pairs. These image pairs were then augmented and used to train X‐Net to automatically label vertebral levels using fivefold cross validation (n = 803). Prior to testing on the final test set (n = 94), CT scans of patients with anatomical abnormalities, surgical implants, or other atypical features were placed in an outlier group (n = 20), whereas those without these features were placed in a normative group (n = 74). The performance of X‐Net, the X‐Net Ensemble, and another leading vertebral labeling architecture (Btrfly Net) was evaluated on both groups using identification rate, localization error, and other metrics. The performance of our approach was also evaluated on the MICCAI 2014 test dataset (n = 60). Finally, a method to detect irregular intervertebral spacing, based on the rate of change in spacing between predicted vertebral body locations, was created and evaluated on the final test set. Receiver operating characteristic (ROC) analysis was used to assess this method's ability to detect irregular intervertebral spacing.
RESULTS: The spinal canal architecture yielded centroid coordinates spanning S2‐C1 with submillimeter accuracy (mean ± standard deviation, 0.399 ± 0.299 mm; n = 330 patients), and localization of the spinal canal centroid was robust to surgical implants and widespread metastases. Cross‐validation testing of X‐Net for vertebral labeling revealed that model performance (F1 score, precision, and sensitivity) improved with CT scan length. The X‐Net, X‐Net Ensemble, and Btrfly Net mean identification rates and localization errors were 92.4% and 2.3 mm, 94.2% and 2.2 mm, and 90.5% and 3.4 mm, respectively, in the final test set, and 96.7% and 2.2 mm, 96.9% and 2.0 mm, and 94.8% and 3.3 mm, respectively, within the normative group of the final test set. The X‐Net Ensemble yielded the highest percentage of patients (94%) with all vertebral bodies identified correctly in the final test set when the three most inferior and superior vertebral bodies were excluded from the CT scan. The method used to detect labeling failures had 67% sensitivity and 95% specificity when combined with the X‐Net Ensemble and flagged five of six patients with atypical vertebral counts (an additional thoracic vertebra (T13), an additional lumbar vertebra (L6), or only four lumbar vertebrae). Mean identification rate on the MICCAI 2014 dataset using an X‐Net Ensemble increased from 86.8% to 91.3% through transfer learning, achieving state‐of‐the‐art results for various regions of the spine.
CONCLUSIONS: We trained X‐Net, our unique convolutional neural network, to automatically label vertebral levels from S2 to C1 on palliative radiotherapy CT images and found that an ensemble of X‐Net models had a high vertebral body identification rate (94.2%) and small localization errors (2.2 ± 1.8 mm). In addition, our transfer learning approach achieved state‐of‐the‐art results on a well‐known benchmark dataset, with a high identification rate (91.3%) and low localization error (3.3 ± 2.7 mm). When we prescreened radiotherapy CT images for hardware, surgical implants, or other anatomic abnormalities prior to the use of X‐Net, it labeled the spine correctly in more than 97% of patients; without prescreening, it did so in 94% of patients. Automatically generated labels are robust to widespread vertebral metastases and surgical implants, and our method to detect labeling failures based on neighborhood intervertebral spacing can reliably identify patients with an additional lumbar or thoracic vertebral body. |
format | Online Article Text |
id | pubmed-7756475 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | John Wiley and Sons Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7756475 2020-12-28 Evaluation of a multiview architecture for automatic vertebral labeling of palliative radiotherapy simulation CT images Netherton, Tucker J.; Rhee, Dong Joo; Cardenas, Carlos E.; Chung, Caroline; Klopp, Ann H.; Peterson, Christine B.; Howell, Rebecca M.; Balter, Peter A.; Court, Laurence E. Med Phys QUANTITATIVE IMAGING AND IMAGE PROCESSING John Wiley and Sons Inc. 2020-09-15 2020-11 /pmc/articles/PMC7756475/ /pubmed/33459402 http://dx.doi.org/10.1002/mp.14415 Text en © 2020 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of the American Association of Physicists in Medicine. This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ license, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited. |
title | Evaluation of a multiview architecture for automatic vertebral labeling of palliative radiotherapy simulation CT images |
topic | QUANTITATIVE IMAGING AND IMAGE PROCESSING |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7756475/ https://www.ncbi.nlm.nih.gov/pubmed/33459402 http://dx.doi.org/10.1002/mp.14415 |