The Value of Source Data Verification in a Cancer Clinical Trial
Main Authors: Tudur Smith, Catrin; Stocken, Deborah D.; Dunn, Janet; Cox, Trevor; Ghaneh, Paula; Cunningham, David; Neoptolemos, John P.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2012
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3520949/ https://www.ncbi.nlm.nih.gov/pubmed/23251597 http://dx.doi.org/10.1371/journal.pone.0051623
_version_ | 1782252868057694208 |
author | Tudur Smith, Catrin; Stocken, Deborah D.; Dunn, Janet; Cox, Trevor; Ghaneh, Paula; Cunningham, David; Neoptolemos, John P.
author_facet | Tudur Smith, Catrin; Stocken, Deborah D.; Dunn, Janet; Cox, Trevor; Ghaneh, Paula; Cunningham, David; Neoptolemos, John P.
author_sort | Tudur Smith, Catrin |
collection | PubMed |
description | BACKGROUND: Source data verification (SDV) is a resource-intensive method of quality assurance frequently used in clinical trials. There is no empirical evidence to suggest that SDV would impact on the comparative treatment effect results of a clinical trial. METHODS: Data discrepancies and comparative treatment effects obtained following 100% SDV were compared to those based on data without SDV. Overall survival (OS) and progression-free survival (PFS) were compared using Kaplan-Meier curves, log-rank tests and Cox models. Tumour response classifications, comparative treatment odds ratios (ORs) for the outcome objective response rate, and the number of serious adverse events (SAEs) were also compared. OS estimates based on SDV data were compared against estimates obtained from centrally monitored data. FINDINGS: Data discrepancies were identified between the different monitoring procedures for the majority of variables examined, with some variation in discrepancy rates. There were no systematic patterns to the discrepancies, and their impact on OS, the primary outcome of the trial, was negligible (HR (95% CI): 1.18 (0.99 to 1.41), p = 0.064 with 100% SDV; 1.18 (0.99 to 1.42), p = 0.068 without SDV; 1.18 (0.99 to 1.40), p = 0.073 with central monitoring). Results were similar for PFS. More extreme discrepancies were found for the subjective outcome of overall objective response (OR (95% CI): 1.67 (1.04 to 2.68), p = 0.03 with 100% SDV; 2.45 (1.49 to 4.04), p = 0.0003 without any SDV), mostly due to differing CT scan data. INTERPRETATION: Quality assurance methods used in clinical trials should be informed by empirical evidence. In this empirical comparison, SDV was expensive and identified random errors that had little impact on the results and clinical conclusions of the trial. Central monitoring using an external data source was a more efficient approach for the primary outcome of OS. For the subjective outcome of objective response, an independent blinded review committee and a tracking system to monitor missing scan data could be more efficient than SDV. |
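As a minimal, illustrative sketch (not part of the indexed record or the article), the comparison described in the abstract, estimating a treatment hazard ratio for overall survival with a Cox model and an odds ratio for objective response, could be run on both versions of a dataset (with and without SDV) and the estimates compared. The column names (os_time, os_event, treatment) and the lifelines/scipy dependencies are assumptions for illustration only.

```python
# Hypothetical sketch: compare the treatment hazard ratio (overall survival)
# and the objective-response odds ratio on two versions of the same trial
# dataset -- one with 100% SDV and one without. Column names are assumed.
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import fisher_exact


def os_hazard_ratio(df: pd.DataFrame):
    """Fit a Cox model of treatment arm on overall survival.

    Expects columns: os_time (e.g. months), os_event (1 = death), treatment (0/1).
    Returns the hazard ratio and its p-value.
    """
    cph = CoxPHFitter()
    cph.fit(df[["os_time", "os_event", "treatment"]],
            duration_col="os_time", event_col="os_event")
    row = cph.summary.loc["treatment"]
    return float(row["exp(coef)"]), float(row["p"])


def response_odds_ratio(table_2x2):
    """Odds ratio and p-value for objective response from a 2x2 table:
    [[responders_arm_A, responders_arm_B],
     [non_responders_arm_A, non_responders_arm_B]]."""
    return fisher_exact(table_2x2)


# Usage (hypothetical data frames representing the two monitoring strategies):
# hr_sdv, p_sdv = os_hazard_ratio(df_with_sdv)      # abstract reports 1.18, p = 0.064
# hr_raw, p_raw = os_hazard_ratio(df_without_sdv)   # abstract reports 1.18, p = 0.068
```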
format | Online Article Text |
id | pubmed-3520949 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2012 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-3520949 2012-12-18 The Value of Source Data Verification in a Cancer Clinical Trial Tudur Smith, Catrin Stocken, Deborah D. Dunn, Janet Cox, Trevor Ghaneh, Paula Cunningham, David Neoptolemos, John P. PLoS One Research Article BACKGROUND: Source data verification (SDV) is a resource-intensive method of quality assurance frequently used in clinical trials. There is no empirical evidence to suggest that SDV would impact on the comparative treatment effect results of a clinical trial. METHODS: Data discrepancies and comparative treatment effects obtained following 100% SDV were compared to those based on data without SDV. Overall survival (OS) and progression-free survival (PFS) were compared using Kaplan-Meier curves, log-rank tests and Cox models. Tumour response classifications, comparative treatment odds ratios (ORs) for the outcome objective response rate, and the number of serious adverse events (SAEs) were also compared. OS estimates based on SDV data were compared against estimates obtained from centrally monitored data. FINDINGS: Data discrepancies were identified between the different monitoring procedures for the majority of variables examined, with some variation in discrepancy rates. There were no systematic patterns to the discrepancies, and their impact on OS, the primary outcome of the trial, was negligible (HR (95% CI): 1.18 (0.99 to 1.41), p = 0.064 with 100% SDV; 1.18 (0.99 to 1.42), p = 0.068 without SDV; 1.18 (0.99 to 1.40), p = 0.073 with central monitoring). Results were similar for PFS. More extreme discrepancies were found for the subjective outcome of overall objective response (OR (95% CI): 1.67 (1.04 to 2.68), p = 0.03 with 100% SDV; 2.45 (1.49 to 4.04), p = 0.0003 without any SDV), mostly due to differing CT scan data. INTERPRETATION: Quality assurance methods used in clinical trials should be informed by empirical evidence. In this empirical comparison, SDV was expensive and identified random errors that had little impact on the results and clinical conclusions of the trial. Central monitoring using an external data source was a more efficient approach for the primary outcome of OS. For the subjective outcome of objective response, an independent blinded review committee and a tracking system to monitor missing scan data could be more efficient than SDV. Public Library of Science 2012-12-12 /pmc/articles/PMC3520949/ /pubmed/23251597 http://dx.doi.org/10.1371/journal.pone.0051623 Text en © 2012 Tudur Smith et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Tudur Smith, Catrin Stocken, Deborah D. Dunn, Janet Cox, Trevor Ghaneh, Paula Cunningham, David Neoptolemos, John P. The Value of Source Data Verification in a Cancer Clinical Trial |
title | The Value of Source Data Verification in a Cancer Clinical Trial |
title_full | The Value of Source Data Verification in a Cancer Clinical Trial |
title_fullStr | The Value of Source Data Verification in a Cancer Clinical Trial |
title_full_unstemmed | The Value of Source Data Verification in a Cancer Clinical Trial |
title_short | The Value of Source Data Verification in a Cancer Clinical Trial |
title_sort | value of source data verification in a cancer clinical trial |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3520949/ https://www.ncbi.nlm.nih.gov/pubmed/23251597 http://dx.doi.org/10.1371/journal.pone.0051623 |
work_keys_str_mv | AT tudursmithcatrin thevalueofsourcedataverificationinacancerclinicaltrial AT stockendeborahd thevalueofsourcedataverificationinacancerclinicaltrial AT dunnjanet thevalueofsourcedataverificationinacancerclinicaltrial AT coxtrevor thevalueofsourcedataverificationinacancerclinicaltrial AT ghanehpaula thevalueofsourcedataverificationinacancerclinicaltrial AT cunninghamdavid thevalueofsourcedataverificationinacancerclinicaltrial AT neoptolemosjohnp thevalueofsourcedataverificationinacancerclinicaltrial AT tudursmithcatrin valueofsourcedataverificationinacancerclinicaltrial AT stockendeborahd valueofsourcedataverificationinacancerclinicaltrial AT dunnjanet valueofsourcedataverificationinacancerclinicaltrial AT coxtrevor valueofsourcedataverificationinacancerclinicaltrial AT ghanehpaula valueofsourcedataverificationinacancerclinicaltrial AT cunninghamdavid valueofsourcedataverificationinacancerclinicaltrial AT neoptolemosjohnp valueofsourcedataverificationinacancerclinicaltrial |