Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium
Main authors: Broglio, Steven P.; Katz, Barry P.; Zhao, Shi; McCrea, Michael; McAllister, Thomas
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2017
Subjects: Original Research Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889766/ https://www.ncbi.nlm.nih.gov/pubmed/29138991 http://dx.doi.org/10.1007/s40279-017-0813-0
_version_ | 1783312746961436672 |
author | Broglio, Steven P. Katz, Barry P. Zhao, Shi McCrea, Michael McAllister, Thomas |
author_sort | Broglio, Steven P. |
collection | PubMed |
description | BACKGROUND: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations, even in the absence of concussion. OBJECTIVE: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large, nationally representative sample of student-athletes. METHODS: Participants (n = 4874) from the Concussion Assessment, Research, and Education (CARE) Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and between years 1 and 3 was estimated using intraclass correlation coefficients or kappa statistics, together with effect sizes (Cohen's d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data. RESULTS: Reliability for the self-reported concussion symptoms, motor control, and brief and extended neurocognitive assessments from year 1 to year 2 ranged from 0.30 to 0.72, while effect sizes ranged from 0.01 to 0.28 (i.e., small). Reliability for these same measures ranged from 0.34 to 0.66 over the year 1–3 interval, with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). Year 1–2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74, with effect sizes from 0.01 to 0.38 (i.e., small to less than medium).
CONCLUSIONS: This investigation found less than optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use remains necessary in the absence of a gold-standard diagnostic measure, with the ultimate goal of developing more refined and psychometrically sound tools for clinical use. Clinical interpretation guidelines are provided so that clinicians can apply these tools with a known degree of certainty. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1007/s40279-017-0813-0) contains supplementary material, which is available to authorized users. |
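The description field names the statistics used to quantify year-to-year consistency: intraclass correlation coefficients for continuous scores, kappa for categorical scores, and Cohen's d for effect size. A minimal Python sketch of two of them follows, using hypothetical scores (the study's data are not reproduced here) and assuming the common ICC(2,1) form (two-way random effects, absolute agreement), since the record does not state which ICC variant was used:

```python
# Sketch of two reliability statistics named in the abstract. The ICC form
# is an assumption: ICC(2,1), two-way random effects, absolute agreement.
from statistics import mean, stdev

def icc_2_1(year1, year2):
    """ICC(2,1) for paired baseline scores from two annual administrations."""
    n, k = len(year1), 2
    rows = list(zip(year1, year2))
    grand = sum(year1 + year2) / (n * k)
    row_means = [sum(r) / k for r in rows]   # per-athlete means
    col_means = [mean(year1), mean(year2)]   # per-year means
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ms_r = ss_rows / (n - 1)                 # between-subjects mean square
    ms_c = ss_cols / (k - 1)                 # between-years mean square
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def cohens_d(year1, year2):
    """Cohen's d for the mean year-to-year difference, pooled SD."""
    pooled_sd = ((stdev(year1) ** 2 + stdev(year2) ** 2) / 2) ** 0.5
    return abs(mean(year1) - mean(year2)) / pooled_sd

# Hypothetical symptom-scale scores for six athletes at two annual baselines.
y1 = [20, 24, 19, 30, 25, 22]
y2 = [22, 25, 18, 31, 24, 23]
print(f"ICC(2,1) = {icc_2_1(y1, y2):.2f}, Cohen's d = {cohens_d(y1, y2):.2f}")
```

With these made-up scores the ICC is high (≈0.96) and the effect size small (≈0.12); the abstract's reported ranges (reliabilities of roughly 0.28–0.74, effect sizes of 0.01–0.42) would be read on the same scales.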
format | Online Article Text |
id | pubmed-5889766 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-5889766 2018-04-12 Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium. Broglio, Steven P.; Katz, Barry P.; Zhao, Shi; McCrea, Michael; McAllister, Thomas. Sports Med, Original Research Article. (Abstract as given in the description field above.) Springer International Publishing 2017-11-14 2018 /pmc/articles/PMC5889766/ /pubmed/29138991 http://dx.doi.org/10.1007/s40279-017-0813-0 Text en © The Author(s) 2018, corrected publication March 2018. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. |
title | Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium |
topic | Original Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889766/ https://www.ncbi.nlm.nih.gov/pubmed/29138991 http://dx.doi.org/10.1007/s40279-017-0813-0 |