
Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies

BACKGROUND: In January 2003, STAndards for the Reporting of Diagnostic accuracy studies (STARD) were published in a number of journals to improve the quality of reporting in diagnostic accuracy studies. We designed a study to investigate the inter-assessment reproducibility, and the intra- and inter-observer reproducibility, of the items in the STARD statement.

Full description

Bibliographic Details
Main Authors: Smidt, Nynke, Rutjes, Anne WS, van der Windt, Daniëlle AWM, Ostelo, Raymond WJG, Bossuyt, Patrick M, Reitsma, Johannes B, Bouter, Lex M, de Vet, Henrica CW
Format: Text
Language: English
Published: BioMed Central 2006
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1522016/
https://www.ncbi.nlm.nih.gov/pubmed/16539705
http://dx.doi.org/10.1186/1471-2288-6-12
_version_ 1782128785533960192
author Smidt, Nynke
Rutjes, Anne WS
van der Windt, Daniëlle AWM
Ostelo, Raymond WJG
Bossuyt, Patrick M
Reitsma, Johannes B
Bouter, Lex M
de Vet, Henrica CW
author_facet Smidt, Nynke
Rutjes, Anne WS
van der Windt, Daniëlle AWM
Ostelo, Raymond WJG
Bossuyt, Patrick M
Reitsma, Johannes B
Bouter, Lex M
de Vet, Henrica CW
author_sort Smidt, Nynke
collection PubMed
description BACKGROUND: In January 2003, STAndards for the Reporting of Diagnostic accuracy studies (STARD) were published in a number of journals to improve the quality of reporting in diagnostic accuracy studies. We designed a study to investigate the inter-assessment reproducibility, and the intra- and inter-observer reproducibility, of the items in the STARD statement. METHODS: Thirty-two diagnostic accuracy studies published in 2000 in medical journals with an impact factor of at least 4 were included. Two reviewers independently evaluated the quality of reporting of these studies using the 25 items of the STARD statement. A consensus evaluation was obtained by discussing and resolving disagreements between reviewers. Almost two years later, the same studies were evaluated by the same reviewers. For each item, percentage agreement and Cohen's kappa between the first and second consensus assessments (inter-assessment) were calculated. Intraclass correlation coefficients (ICC) were calculated to evaluate the reliability of the checklist. RESULTS: The overall inter-assessment agreement for all items of the STARD statement was 85% (Cohen's kappa 0.70) and varied from 63% to 100% for individual items. The largest differences between the two assessments were found for the reporting of the rationale for the reference standard (kappa 0.37), the number of included participants that underwent tests (kappa 0.28), the distribution of the severity of disease (kappa 0.23), a cross-tabulation of the results of the index test by the results of the reference standard (kappa 0.33), and how indeterminate results, missing data and outliers were handled (kappa 0.25). Large differences for these items were also observed within and between reviewers. The inter-assessment reliability of the STARD checklist was satisfactory (ICC = 0.79 [95% CI: 0.62 to 0.89]). CONCLUSION: Although the overall reproducibility of assessing the quality of reporting of diagnostic accuracy studies using the STARD statement was found to be good, substantial disagreements were found for specific items. These disagreements were caused not so much by differences in interpretation of the items by the reviewers as by difficulties in assessing the reporting of these items due to a lack of clarity within the articles. Including a flow diagram in all reports of diagnostic accuracy studies would be very helpful in reducing confusion among readers and reviewers.
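
The description above names the agreement statistics used between the two rounds of consensus assessments (percentage agreement and Cohen's kappa). The following is a minimal, hypothetical Python sketch of how those two statistics can be computed for a single checklist item; the data, function names, and "yes"/"no" labels are illustrative assumptions for exposition, not the authors' code, and the ICC analysis is not reproduced here.

    # Illustrative sketch only: percentage agreement and Cohen's kappa for one
    # STARD item scored in two assessment rounds (first vs. second consensus)
    # across several studies. Scores here are hypothetical "yes"/"no" labels.
    from collections import Counter

    def percent_agreement(a, b):
        # Proportion of studies given the same score in both assessments.
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        # Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e), where
        # p_e is the agreement expected from the marginal score frequencies.
        n = len(a)
        p_o = percent_agreement(a, b)
        count_a, count_b = Counter(a), Counter(b)
        p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in set(a) | set(b))
        return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

    # Hypothetical scores for one item over eight studies.
    first  = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
    second = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
    print(percent_agreement(first, second))  # 0.75
    print(cohens_kappa(first, second))       # about 0.47

In this toy example the raw agreement (0.75) is noticeably higher than the chance-corrected kappa (about 0.47), which mirrors why the abstract reports both measures per item.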
format Text
id pubmed-1522016
institution National Center for Biotechnology Information
language English
publishDate 2006
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-15220162006-07-26 Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies Smidt, Nynke Rutjes, Anne WS van der Windt, Daniëlle AWM Ostelo, Raymond WJG Bossuyt, Patrick M Reitsma, Johannes B Bouter, Lex M de Vet, Henrica CW BMC Med Res Methodol Research Article BACKGROUND: In January 2003, STAndards for the Reporting of Diagnostic accuracy studies (STARD) were published in a number of journals to improve the quality of reporting in diagnostic accuracy studies. We designed a study to investigate the inter-assessment reproducibility, and the intra- and inter-observer reproducibility, of the items in the STARD statement. METHODS: Thirty-two diagnostic accuracy studies published in 2000 in medical journals with an impact factor of at least 4 were included. Two reviewers independently evaluated the quality of reporting of these studies using the 25 items of the STARD statement. A consensus evaluation was obtained by discussing and resolving disagreements between reviewers. Almost two years later, the same studies were evaluated by the same reviewers. For each item, percentage agreement and Cohen's kappa between the first and second consensus assessments (inter-assessment) were calculated. Intraclass correlation coefficients (ICC) were calculated to evaluate the reliability of the checklist. RESULTS: The overall inter-assessment agreement for all items of the STARD statement was 85% (Cohen's kappa 0.70) and varied from 63% to 100% for individual items. The largest differences between the two assessments were found for the reporting of the rationale for the reference standard (kappa 0.37), the number of included participants that underwent tests (kappa 0.28), the distribution of the severity of disease (kappa 0.23), a cross-tabulation of the results of the index test by the results of the reference standard (kappa 0.33), and how indeterminate results, missing data and outliers were handled (kappa 0.25). Large differences for these items were also observed within and between reviewers. The inter-assessment reliability of the STARD checklist was satisfactory (ICC = 0.79 [95% CI: 0.62 to 0.89]). CONCLUSION: Although the overall reproducibility of assessing the quality of reporting of diagnostic accuracy studies using the STARD statement was found to be good, substantial disagreements were found for specific items. These disagreements were caused not so much by differences in interpretation of the items by the reviewers as by difficulties in assessing the reporting of these items due to a lack of clarity within the articles. Including a flow diagram in all reports of diagnostic accuracy studies would be very helpful in reducing confusion among readers and reviewers. BioMed Central 2006-03-15 /pmc/articles/PMC1522016/ /pubmed/16539705 http://dx.doi.org/10.1186/1471-2288-6-12 Text en Copyright © 2006 Smidt et al; licensee BioMed Central Ltd. http://creativecommons.org/licenses/by/2.0 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Smidt, Nynke
Rutjes, Anne WS
van der Windt, Daniëlle AWM
Ostelo, Raymond WJG
Bossuyt, Patrick M
Reitsma, Johannes B
Bouter, Lex M
de Vet, Henrica CW
Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
title Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
title_full Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
title_fullStr Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
title_full_unstemmed Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
title_short Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
title_sort reproducibility of the stard checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1522016/
https://www.ncbi.nlm.nih.gov/pubmed/16539705
http://dx.doi.org/10.1186/1471-2288-6-12
work_keys_str_mv AT smidtnynke reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT rutjesannews reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT vanderwindtdanielleawm reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT osteloraymondwjg reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT bossuytpatrickm reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT reitsmajohannesb reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT bouterlexm reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies
AT devethenricacw reproducibilityofthestardchecklistaninstrumenttoassessthequalityofreportingofdiagnosticaccuracystudies