The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability
The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses, in order to interpret results and make clinical decisions. This paper makes a case for clinicians...
Main Authors: | Vaz, Sharmila; Falkmer, Torbjörn; Passmore, Anne Elizabeth; Parsons, Richard; Andreou, Pantelis |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2013 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3767825/ https://www.ncbi.nlm.nih.gov/pubmed/24040139 http://dx.doi.org/10.1371/journal.pone.0073990 |
_version_ | 1782283712577142784 |
author | Vaz, Sharmila Falkmer, Torbjörn Passmore, Anne Elizabeth Parsons, Richard Andreou, Pantelis |
author_facet | Vaz, Sharmila Falkmer, Torbjörn Passmore, Anne Elizabeth Parsons, Richard Andreou, Pantelis |
author_sort | Vaz, Sharmila |
collection | PubMed |
description | The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider measurement error (ME) indices, such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD), over relative reliability coefficients such as Pearson’s r and the Intraclass Correlation Coefficient (ICC), when selecting tools to measure change and when inferring that a measured change is true. The authors present statistical methods that form part of the current approach to evaluating the test–retest reliability of assessment tools and outcome measurements. Selected examples from a previous test–retest study are used to elucidate the added advantage that knowledge of the ME of an assessment tool brings to clinical decision making. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that the tool can measure. |
format | Online Article Text |
id | pubmed-3767825 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-37678252013-09-13 The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability Vaz, Sharmila Falkmer, Torbjörn Passmore, Anne Elizabeth Parsons, Richard Andreou, Pantelis PLoS One Research Article The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider measurement error (ME) indices, such as the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD), over relative reliability coefficients such as Pearson’s r and the Intraclass Correlation Coefficient (ICC), when selecting tools to measure change and when inferring that a measured change is true. The authors present statistical methods that form part of the current approach to evaluating the test–retest reliability of assessment tools and outcome measurements. Selected examples from a previous test–retest study are used to elucidate the added advantage that knowledge of the ME of an assessment tool brings to clinical decision making. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that the tool can measure. Public Library of Science 2013-09-09 /pmc/articles/PMC3767825/ /pubmed/24040139 http://dx.doi.org/10.1371/journal.pone.0073990 Text en © 2013 Vaz et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Vaz, Sharmila Falkmer, Torbjörn Passmore, Anne Elizabeth Parsons, Richard Andreou, Pantelis The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability |
title | The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability |
title_full | The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability |
title_fullStr | The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability |
title_full_unstemmed | The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability |
title_short | The Case for Using the Repeatability Coefficient When Calculating Test–Retest Reliability |
title_sort | case for using the repeatability coefficient when calculating test–retest reliability |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3767825/ https://www.ncbi.nlm.nih.gov/pubmed/24040139 http://dx.doi.org/10.1371/journal.pone.0073990 |
work_keys_str_mv | AT vazsharmila thecaseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT falkmertorbjorn thecaseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT passmoreanneelizabeth thecaseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT parsonsrichard thecaseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT andreoupantelis thecaseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT vazsharmila caseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT falkmertorbjorn caseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT passmoreanneelizabeth caseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT parsonsrichard caseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability AT andreoupantelis caseforusingtherepeatabilitycoefficientwhencalculatingtestretestreliability |
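The abstract describes the CR as a bound, in the tool's own units, below which an observed change cannot be distinguished from measurement error. As a minimal sketch of that idea, the snippet below computes a CR from paired test–retest scores using the common Bland–Altman formulation (1.96 × the standard deviation of the within-subject differences, assuming negligible mean bias); the scores themselves are hypothetical and not taken from the article.

```python
import math

def repeatability_coefficient(test, retest):
    """Coefficient of Repeatability (CR) from paired test-retest scores.

    Bland-Altman formulation: CR = 1.96 * SD of the within-subject
    differences (assumes negligible systematic bias between sessions).
    Observed changes smaller than the CR cannot be distinguished from
    measurement error.
    """
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the differences (n - 1 denominator)
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return 1.96 * sd_d

# Hypothetical scores on a standardised tool, two sessions per person
test = [50, 47, 62, 55, 49, 58, 61, 53]
retest = [52, 45, 60, 57, 50, 56, 63, 51]
cr = repeatability_coefficient(test, retest)
# A follow-up change must exceed cr (in the tool's own units)
# before it can be interpreted as true change rather than noise.
```

Because the CR is expressed in the same units as the tool, a clinician can compare it directly against an individual's observed change score, which relative coefficients such as r or the ICC do not allow.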