
Inter-rater agreement between humans and computer in quantitative assessment of computed tomography after cardiac arrest


Bibliographic Details
Main Authors: Kenda, Martin, Cheng, Zhuo, Guettler, Christopher, Storm, Christian, Ploner, Christoph J., Leithner, Christoph, Scheel, Michael
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neurology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9606648/
https://www.ncbi.nlm.nih.gov/pubmed/36313501
http://dx.doi.org/10.3389/fneur.2022.990208
author Kenda, Martin
Cheng, Zhuo
Guettler, Christopher
Storm, Christian
Ploner, Christoph J.
Leithner, Christoph
Scheel, Michael
collection PubMed
description BACKGROUND: Head computed tomography (CT) is used to predict neurological outcome after cardiac arrest (CA). The current reference standard includes quantitative image analysis by a neuroradiologist to determine the Gray-White-Matter Ratio (GWR), which is calculated via manual measurement of radiodensity in different brain regions. Recently, automated analysis methods have been introduced. There are limited data on the inter-rater agreement of both methods. METHODS: Three blinded human raters (neuroradiologist, neurologist, student) with different levels of clinical experience retrospectively assessed the GWR in head CTs of 95 CA patients. GWR was also quantified by a recently published computer algorithm that uses coregistration with standardized brain spaces to identify regions of interest (ROIs). We calculated intraclass correlation coefficients (ICC) for inter-rater agreement between human and computer raters, as well as area under the curve (AUC) and sensitivity/specificity for poor outcome prognostication. RESULTS: Inter-rater agreement on GWR was very good (ICC 0.82–0.84) between all three human raters across different levels of expertise and between the computer algorithm and the neuroradiologist (ICC 0.83; 95% CI 0.78–0.88). Despite high overall agreement, we observed considerable, clinically relevant deviations of GWR measurements (up to 0.24) in individual patients. In our cohort, at a GWR threshold of 1.10, this did not lead to any false poor neurological outcome prediction. CONCLUSION: Human and computer raters demonstrated high overall agreement in GWR determination in head CTs after CA. The clinically relevant deviations of GWR measurement in individual patients underscore the necessity of additional qualitative evaluation and integration of head CT findings into a multimodal approach to prognostication of neurological outcome after CA.
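The abstract describes the GWR as a ratio of radiodensities measured in gray- and white-matter ROIs, compared against a threshold of 1.10 for poor-outcome prediction. As a minimal sketch of that computation: the function names, the specific ROIs, the illustrative Hounsfield-unit values, and the "below threshold predicts poor outcome" convention below are assumptions for illustration, not taken from the study itself.

```python
# Hedged sketch: deriving a Gray-White-Matter Ratio (GWR) from ROI
# radiodensity measurements in Hounsfield units (HU). All names and
# values are illustrative, not from the study.

def gwr(gray_hu, white_hu):
    """GWR = mean gray-matter radiodensity / mean white-matter radiodensity."""
    return (sum(gray_hu) / len(gray_hu)) / (sum(white_hu) / len(white_hu))

def predicts_poor_outcome(ratio, threshold=1.10):
    """Assumed convention: a GWR below the threshold (1.10 in this
    study) is taken to predict poor neurological outcome."""
    return ratio < threshold

# Illustrative HU values for gray-matter ROIs (e.g., caudate, putamen)
# and white-matter ROIs (e.g., corpus callosum, internal capsule).
gray = [36.0, 34.5]
white = [27.5, 28.5]
ratio = gwr(gray, white)
print(round(ratio, 2))               # 1.26
print(predicts_poor_outcome(ratio))  # False
```

The automated method in the study obtains the same kind of ROI means via coregistration with standardized brain spaces rather than manual placement; the ratio and threshold step are unchanged.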
format Online
Article
Text
id pubmed-9606648
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9606648 2022-10-28 Front Neurol Neurology Frontiers Media S.A. 2022-10-13 /pmc/articles/PMC9606648/ /pubmed/36313501 http://dx.doi.org/10.3389/fneur.2022.990208 Text en Copyright © 2022 Kenda, Cheng, Guettler, Storm, Ploner, Leithner and Scheel. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/
title Inter-rater agreement between humans and computer in quantitative assessment of computed tomography after cardiac arrest
topic Neurology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9606648/
https://www.ncbi.nlm.nih.gov/pubmed/36313501
http://dx.doi.org/10.3389/fneur.2022.990208