Deep convolutional neural networks for automated segmentation of brain metastases trained on clinical data


Bibliographic Details
Main Authors: Bousabarah, Khaled, Ruge, Maximilian, Brand, Julia-Sarita, Hoevels, Mauritius, Rueß, Daniel, Borggrefe, Jan, Große Hokamp, Nils, Visser-Vandewalle, Veerle, Maintz, David, Treuer, Harald, Kocher, Martin
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7171921/
https://www.ncbi.nlm.nih.gov/pubmed/32312276
http://dx.doi.org/10.1186/s13014-020-01514-6
author Bousabarah, Khaled
Ruge, Maximilian
Brand, Julia-Sarita
Hoevels, Mauritius
Rueß, Daniel
Borggrefe, Jan
Große Hokamp, Nils
Visser-Vandewalle, Veerle
Maintz, David
Treuer, Harald
Kocher, Martin
collection PubMed
description INTRODUCTION: Deep learning-based algorithms have demonstrated excellent performance in the segmentation of medical images. We collected a dataset of multiparametric MRI and contour data acquired for use in radiosurgery to evaluate the performance of deep convolutional neural networks (DCNN) in automatic segmentation of brain metastases (BM). METHODS: A conventional U-Net (cU-Net), a modified U-Net (moU-Net) and a U-Net trained only on BM smaller than 0.4 ml (sU-Net) were implemented. Performance was assessed on a separate test set using sensitivity, specificity, average false positive rate (AFPR), the Dice similarity coefficient (DSC), Bland-Altman analysis and the concordance correlation coefficient (CCC). RESULTS: A dataset of 509 patients (1223 BM) was split into a training set (469 pts) and a test set (40 pts). A combination of all trained networks was the most sensitive (0.82) while maintaining a specificity of 0.83. The same model achieved a sensitivity of 0.97 and a specificity of 0.94 when considering only lesions larger than 0.06 ml (75% of all lesions). The type of primary cancer had no significant influence on the mean DSC per lesion (p = 0.60). Agreement between manually and automatically assessed tumor volumes, quantified by a CCC of 0.87 (95% CI, 0.77–0.93), was excellent. CONCLUSION: Using a dataset which properly captured the variation in imaging appearance observed in clinical practice, we were able to conclude that DCNNs reach clinically relevant performance for most lesions. Clinical applicability is currently limited by the size of the target lesion. Further studies should address whether small targets are accurately represented in the test data.
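For reference, the abstract's two key evaluation metrics can be computed as in the minimal sketch below: the Dice similarity coefficient for per-lesion overlap between a predicted and a manual segmentation mask, and Lin's concordance correlation coefficient for agreement between manually and automatically measured volumes. This is an illustrative NumPy outline, not the authors' implementation; all function and variable names (and the toy data) are our own.

```python
import numpy as np


def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2 * |P ∩ T| / (|P| + |T|); 1.0 indicates perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denominator


def concordance_correlation_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's CCC between paired measurements (e.g. manual vs. automatic volumes in ml)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    covariance = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * covariance / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)


if __name__ == "__main__":
    # Toy example: random binary masks and hypothetical paired volume measurements.
    rng = np.random.default_rng(0)
    pred_mask = rng.random((32, 32, 32)) > 0.5
    true_mask = rng.random((32, 32, 32)) > 0.5
    print("DSC:", dice_similarity_coefficient(pred_mask, true_mask))

    manual_volumes = np.array([0.1, 0.5, 1.2, 3.4, 7.8])       # ml, hypothetical
    automatic_volumes = np.array([0.12, 0.45, 1.1, 3.6, 7.5])  # ml, hypothetical
    print("CCC:", concordance_correlation_coefficient(manual_volumes, automatic_volumes))
```

A DSC and a CCC of 1.0 both indicate perfect agreement; the CCC additionally penalizes systematic over- or underestimation of volume, which is why it complements the Bland-Altman analysis mentioned in the abstract.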
format Online
Article
Text
id pubmed-7171921
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-7171921 2020-04-24 Radiat Oncol (Research). BioMed Central 2020-04-20 /pmc/articles/PMC7171921/ /pubmed/32312276 http://dx.doi.org/10.1186/s13014-020-01514-6 Text en © The Author(s) 2020. Open Access under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
title Deep convolutional neural networks for automated segmentation of brain metastases trained on clinical data
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7171921/
https://www.ncbi.nlm.nih.gov/pubmed/32312276
http://dx.doi.org/10.1186/s13014-020-01514-6