Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1

Bibliographic Details
Main Authors: Muenzel, Daniela, Engels, Heinz-Peter, Bruegel, Melanie, Kehl, Victoria, Rummeny, Ernst J., Metz, Stephan
Format: Online Article Text
Language: English
Published: Versita, Warsaw 2012
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3423763/
https://www.ncbi.nlm.nih.gov/pubmed/22933974
http://dx.doi.org/10.2478/v10019-012-0009-z
_version_ 1782241146275102720
author Muenzel, Daniela
Engels, Heinz-Peter
Bruegel, Melanie
Kehl, Victoria
Rummeny, Ernst J.
Metz, Stephan
author_facet Muenzel, Daniela
Engels, Heinz-Peter
Bruegel, Melanie
Kehl, Victoria
Rummeny, Ernst J.
Metz, Stephan
author_sort Muenzel, Daniela
collection PubMed
description BACKGROUND: The assessment of cancer treatment in oncological clinical trials is usually based on serial measurements of tumour size according to the Response Evaluation Criteria in Solid Tumours (RECIST) guidelines. The aim of our study was to evaluate the variability of measurements of target lesions by readers, as well as the impact on response evaluation, workflow and reporting. PATIENTS AND METHODS: Twenty oncological patients were included in the study, with CT examinations from the thorax to the pelvis performed on a 64-slice CT scanner. Four readers independently defined and measured the size of target lesions at baseline and follow-up with PACS (Picture Archiving and Communication System) and LMS (Lesion Management Solutions, Median Technologies, Valbonne Sophia Antipolis, France), according to the RECIST 1.1 criteria. Variability in measurements made with the PACS or LMS software was assessed with the Bland–Altman approach. Inter- and intra-observer variabilities were calculated for identical lesions, and the overall response per case was determined. In addition, the time required for evaluation and reporting of each case was recorded. RESULTS: For single lesions, the median intra-observer variability ranged from 4.9% to 9.6% (mean 5.9%) and the median inter-observer variability from 4.3% to 11.4% (mean 7.1%), across the different evaluation time points, imaging systems and observers. Nevertheless, the variability of the change in the sum of longest diameters (Δ sum LD), which is mandatory for classification of the overall response, was 24%. Compared with the mean results of multiple observers, the overall response evaluation was discrepant in 6.3% of cases when assessed by a single observer and in 12% of cases when assessed by different observers. The mean case evaluation time was 286 s vs. 228 s at baseline and 267 s vs. 196 s at follow-up for PACS and LMS, respectively. CONCLUSIONS: Uni-dimensional measurements of target lesions show low intra- and inter-observer variability, but the high variability of Δ sum LD indicates a potential for misclassification of the overall response according to the RECIST 1.1 guidelines. Nevertheless, the reproducibility of RECIST reporting can be improved when a case is assessed by a single observer and when the mean results of multiple observers are used. Case-based evaluation time was shortened by up to 27% using custom software.
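
As a minimal illustration of the analysis described in the abstract (not part of the article or its PACS/LMS software), the following Python sketch shows a Bland–Altman limits-of-agreement calculation for paired lesion measurements and a RECIST 1.1 target-lesion response classification from the sum of longest diameters. The thresholds (a decrease of at least 30% from the baseline sum for partial response; an increase of at least 20% over the smallest sum on study plus an absolute increase of at least 5 mm for progressive disease) follow the published RECIST 1.1 guideline, while the function names and example data are invented for illustration.

# Hedged sketch: illustrative only, assuming the RECIST 1.1 thresholds
# summarized above; measurements are sums of longest diameters in mm.
from statistics import mean, stdev

def bland_altman(readings_a, readings_b):
    """Bias and 95% limits of agreement for two paired sets of readings (mm)."""
    diffs = [a - b for a, b in zip(readings_a, readings_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

def recist_target_response(baseline_sum_ld, nadir_sum_ld, current_sum_ld):
    """Classify target-lesion response per RECIST 1.1 from sum LD values (mm)."""
    if current_sum_ld == 0:
        return "CR"  # complete response: disappearance of all target lesions
    increase = current_sum_ld - nadir_sum_ld
    if increase >= 0.2 * nadir_sum_ld and increase >= 5:
        return "PD"  # >=20% increase over the nadir and >=5 mm absolute increase
    if (baseline_sum_ld - current_sum_ld) >= 0.3 * baseline_sum_ld:
        return "PR"  # >=30% decrease from the baseline sum
    return "SD"

if __name__ == "__main__":
    # Invented example: the same four lesions measured by two observers (mm).
    reader1 = [23.0, 41.5, 12.0, 55.0]
    reader2 = [24.5, 39.0, 13.5, 52.0]
    print(bland_altman(reader1, reader2))
    # Invented example: baseline sum LD 120 mm, nadir 120 mm, follow-up 80 mm -> "PR"
    print(recist_target_response(120.0, 120.0, 80.0))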
format Online
Article
Text
id pubmed-3423763
institution National Center for Biotechnology Information
language English
publishDate 2012
publisher Versita, Warsaw
record_format MEDLINE/PubMed
spelling pubmed-3423763 2012-08-29 Radiol Oncol Research Article Versita, Warsaw 2012-01-02 /pmc/articles/PMC3423763/ /pubmed/22933974 http://dx.doi.org/10.2478/v10019-012-0009-z Text en Copyright © by Association of Radiology & Oncology http://creativecommons.org/licenses/by/3.0 This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
spellingShingle Research Article
Muenzel, Daniela
Engels, Heinz-Peter
Bruegel, Melanie
Kehl, Victoria
Rummeny, Ernst J.
Metz, Stephan
Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1
title Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1
title_full Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1
title_fullStr Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1
title_full_unstemmed Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1
title_short Intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to RECIST 1.1
title_sort intra- and inter-observer variability in measurement of target lesions: implication on response evaluation according to recist 1.1
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3423763/
https://www.ncbi.nlm.nih.gov/pubmed/22933974
http://dx.doi.org/10.2478/v10019-012-0009-z
work_keys_str_mv AT muenzeldaniela intraandinterobservervariabilityinmeasurementoftargetlesionsimplicationonresponseevaluationaccordingtorecist11
AT engelsheinzpeter intraandinterobservervariabilityinmeasurementoftargetlesionsimplicationonresponseevaluationaccordingtorecist11
AT bruegelmelanie intraandinterobservervariabilityinmeasurementoftargetlesionsimplicationonresponseevaluationaccordingtorecist11
AT kehlvictoria intraandinterobservervariabilityinmeasurementoftargetlesionsimplicationonresponseevaluationaccordingtorecist11
AT rummenyernstj intraandinterobservervariabilityinmeasurementoftargetlesionsimplicationonresponseevaluationaccordingtorecist11
AT metzstephan intraandinterobservervariabilityinmeasurementoftargetlesionsimplicationonresponseevaluationaccordingtorecist11