
Brain tumour segmentation with incomplete imaging data


Bibliographic Details
Main Authors: Ruffle, James K, Mohinta, Samia, Gray, Robert, Hyare, Harpreet, Nachev, Parashkev
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10144694/
https://www.ncbi.nlm.nih.gov/pubmed/37124946
http://dx.doi.org/10.1093/braincomms/fcad118
_version_ 1785034157782466560
author Ruffle, James K
Mohinta, Samia
Gray, Robert
Hyare, Harpreet
Nachev, Parashkev
author_facet Ruffle, James K
Mohinta, Samia
Gray, Robert
Hyare, Harpreet
Nachev, Parashkev
author_sort Ruffle, James K
collection PubMed
description Progress in neuro-oncology is increasingly recognized to be obstructed by the marked heterogeneity—genetic, pathological, and clinical—of brain tumours. If the treatment susceptibilities and outcomes of individual patients differ widely, determined by the interactions of many multimodal characteristics, then large-scale, fully-inclusive, richly phenotyped data—including imaging—will be needed to predict them at the individual level. Such data can realistically be acquired only in the routine clinical stream, where its quality is inevitably degraded by the constraints of real-world clinical care. Although contemporary machine learning could theoretically provide a solution to this task, especially in the domain of imaging, its ability to cope with realistic, incomplete, low-quality data is yet to be determined. In the largest and most comprehensive study of its kind, applying state-of-the-art brain tumour segmentation models to large-scale, multi-site MRI data of 1251 individuals, here we quantify the comparative fidelity of automated segmentation models drawn from MR data replicating the various levels of completeness observed in real life. We demonstrate that models trained on incomplete data can segment lesions very well, often equivalently to those trained on the full complement of images, exhibiting Dice coefficients of 0.907 (single sequence) to 0.945 (complete set) for whole tumours and 0.701 (single sequence) to 0.891 (complete set) for component tissue types. This finding opens the door both to the application of segmentation models to large-scale historical data, for the purpose of building treatment and outcome predictive models, and to their application in real-world clinical care. We further ascertain that segmentation models can accurately detect enhancing tumour in the absence of contrast-enhanced imaging, quantifying the burden of enhancing tumour with an R² > 0.97, varying negligibly with lesion morphology. Such models can quantify enhancing tumour without the administration of intravenous contrast, inviting a revision of the notion of tumour enhancement if the same information can be extracted without contrast-enhanced imaging. Our analysis includes validation on a heterogeneous, real-world 50-patient sample of brain tumour imaging acquired over the last 15 years at our tertiary centre, demonstrating maintained accuracy even on non-isotropic MRI acquisitions and on complex post-operative imaging with tumour recurrence. This work substantially extends the translational opportunity for quantitative analysis to clinical situations where the full complement of sequences is not available and potentially enables the characterization of contrast-enhanced regions where contrast administration is infeasible or undesirable.
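The Dice coefficients reported in the abstract measure voxel-wise overlap between a predicted tumour mask and the reference segmentation, Dice = 2|A ∩ B| / (|A| + |B|). As a minimal illustrative sketch only (not the authors' code; the array shapes and mask names below are invented for demonstration), the metric can be computed for binary masks as follows:

```python
# Illustrative only: Dice overlap between two binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Return 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D "lesion" masks (hypothetical data, not drawn from the study).
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:7, 2:7, 2:7] = True
b[3:8, 3:8, 3:8] = True
print(f"Dice: {dice_coefficient(a, b):.3f}")  # ~0.512 for these toy masks
```

The R² > 0.97 quoted for enhancing-tumour burden presumably reflects agreement between predicted and reference enhancing-tumour volumes across patients, rather than voxel-wise overlap.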
format Online
Article
Text
id pubmed-10144694
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-10144694 2023-04-29 Brain tumour segmentation with incomplete imaging data Ruffle, James K Mohinta, Samia Gray, Robert Hyare, Harpreet Nachev, Parashkev Brain Commun Original Article Progress in neuro-oncology is increasingly recognized to be obstructed by the marked heterogeneity—genetic, pathological, and clinical—of brain tumours. If the treatment susceptibilities and outcomes of individual patients differ widely, determined by the interactions of many multimodal characteristics, then large-scale, fully-inclusive, richly phenotyped data—including imaging—will be needed to predict them at the individual level. Such data can realistically be acquired only in the routine clinical stream, where its quality is inevitably degraded by the constraints of real-world clinical care. Although contemporary machine learning could theoretically provide a solution to this task, especially in the domain of imaging, its ability to cope with realistic, incomplete, low-quality data is yet to be determined. In the largest and most comprehensive study of its kind, applying state-of-the-art brain tumour segmentation models to large-scale, multi-site MRI data of 1251 individuals, here we quantify the comparative fidelity of automated segmentation models drawn from MR data replicating the various levels of completeness observed in real life. We demonstrate that models trained on incomplete data can segment lesions very well, often equivalently to those trained on the full complement of images, exhibiting Dice coefficients of 0.907 (single sequence) to 0.945 (complete set) for whole tumours and 0.701 (single sequence) to 0.891 (complete set) for component tissue types. This finding opens the door both to the application of segmentation models to large-scale historical data, for the purpose of building treatment and outcome predictive models, and to their application in real-world clinical care. We further ascertain that segmentation models can accurately detect enhancing tumour in the absence of contrast-enhanced imaging, quantifying the burden of enhancing tumour with an R² > 0.97, varying negligibly with lesion morphology. Such models can quantify enhancing tumour without the administration of intravenous contrast, inviting a revision of the notion of tumour enhancement if the same information can be extracted without contrast-enhanced imaging. Our analysis includes validation on a heterogeneous, real-world 50-patient sample of brain tumour imaging acquired over the last 15 years at our tertiary centre, demonstrating maintained accuracy even on non-isotropic MRI acquisitions and on complex post-operative imaging with tumour recurrence. This work substantially extends the translational opportunity for quantitative analysis to clinical situations where the full complement of sequences is not available and potentially enables the characterization of contrast-enhanced regions where contrast administration is infeasible or undesirable. Oxford University Press 2023-04-28 /pmc/articles/PMC10144694/ /pubmed/37124946 http://dx.doi.org/10.1093/braincomms/fcad118 Text en © The Author(s) 2023. Published by Oxford University Press on behalf of the Guarantors of Brain. https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Article
Ruffle, James K
Mohinta, Samia
Gray, Robert
Hyare, Harpreet
Nachev, Parashkev
Brain tumour segmentation with incomplete imaging data
title Brain tumour segmentation with incomplete imaging data
title_full Brain tumour segmentation with incomplete imaging data
title_fullStr Brain tumour segmentation with incomplete imaging data
title_full_unstemmed Brain tumour segmentation with incomplete imaging data
title_short Brain tumour segmentation with incomplete imaging data
title_sort brain tumour segmentation with incomplete imaging data
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10144694/
https://www.ncbi.nlm.nih.gov/pubmed/37124946
http://dx.doi.org/10.1093/braincomms/fcad118
work_keys_str_mv AT rufflejamesk braintumoursegmentationwithincompleteimagingdata
AT mohintasamia braintumoursegmentationwithincompleteimagingdata
AT grayrobert braintumoursegmentationwithincompleteimagingdata
AT hyareharpreet braintumoursegmentationwithincompleteimagingdata
AT nachevparashkev braintumoursegmentationwithincompleteimagingdata