Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing
BACKGROUND: Traditional clinical assessments are used extensively in neurology; however, they can be coarse, which can also make them insensitive to change. Kinarm is a robotic assessment system that has been used for precise assessment of individuals with neurological impairments. However, this pre...
Main authors: | Simmatis, Leif E. R.; Early, Spencer; Moore, Kimberly D.; Appaqaq, Simone; Scott, Stephen H. |
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2020 |
Subjects: | Research |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7331240/ https://www.ncbi.nlm.nih.gov/pubmed/32615979 http://dx.doi.org/10.1186/s12984-020-00713-2 |
_version_ | 1783553283208511488 |
author | Simmatis, Leif E. R.; Early, Spencer; Moore, Kimberly D.; Appaqaq, Simone; Scott, Stephen H. |
author_facet | Simmatis, Leif E. R.; Early, Spencer; Moore, Kimberly D.; Appaqaq, Simone; Scott, Stephen H. |
author_sort | Simmatis, Leif E. R. |
collection | PubMed |
description | BACKGROUND: Traditional clinical assessments are used extensively in neurology; however, they can be coarse, which can also make them insensitive to change. Kinarm is a robotic assessment system that has been used for precise assessment of individuals with neurological impairments. However, this precision also leads to the challenge of identifying whether a given change in performance reflects a significant change in an individual’s ability or is simply natural variation. Our objective here is to derive confidence intervals and thresholds of significant change for Kinarm Standard Tests™ (KST). METHODS: We assessed participants twice within 15 days on all tasks presently available in KST. We determined the 5–95% confidence intervals for each task parameter, and derived thresholds for significant change. We tested for learning effects and corrected for the false discovery rate (FDR) to identify task parameters with significant learning effects. Finally, we calculated intraclass correlation of type ICC [1, 2] (ICC-C) to quantify consistency across assessments. RESULTS: We recruited an average of 56 participants per task. Confidence intervals for Z-Task Scores ranged between 0.61 and 1.55, and the threshold for significant change ranged between 0.87 and 2.19. We determined that 4/11 tasks displayed learning effects that were significant after FDR correction; these 4 tasks primarily tested cognition or cognitive-motor integration. ICC-C values for Z-Task Scores ranged from 0.26 to 0.76. CONCLUSIONS: The present results provide statistical bounds on individual performance for KST as well as significant changes across repeated testing. Most measures of performance had good inter-rater reliability. Tasks with a higher cognitive burden seemed to be more susceptible to learning effects, which should be taken into account when interpreting longitudinal assessments of these tasks. |
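The description above names several standard statistics (5–95% confidence intervals, a threshold of significant change across sessions, FDR correction of learning-effect tests, and a consistency intraclass correlation). The sketch below is a minimal, hypothetical illustration of those quantities in Python on simulated data; the percentile-based change threshold, the placeholder p-values, and all variable names are assumptions and do not reproduce the authors' actual analysis pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration only: simulated Z-Task Scores for the same
# participants measured in two sessions (data and names are assumptions).
rng = np.random.default_rng(0)
session1 = rng.normal(0.0, 1.0, size=56)
session2 = session1 + rng.normal(0.1, 0.5, size=56)  # small practice shift

# 5-95% interval of test-retest differences: one plausible way to bound
# "natural variation", so a later change outside this band could be
# flagged as a significant change.
diffs = session2 - session1
low, high = np.percentile(diffs, [5, 95])
print(f"5-95% interval of differences: [{low:.2f}, {high:.2f}]")

# Learning effect: paired t-test for one parameter, then Benjamini-Hochberg
# FDR adjustment across several parameters (other p-values are placeholders).
p_this = stats.ttest_rel(session2, session1).pvalue
p_all = np.array([p_this, 0.030, 0.400, 0.008])
p_adj = stats.false_discovery_control(p_all)  # SciPy >= 1.11, BH procedure
print("FDR-adjusted p-values:", np.round(p_adj, 3))

# Consistency ICC (single-measures, "ICC-C"-style) from a two-way ANOVA
# decomposition of the participants x sessions score matrix.
scores = np.column_stack([session1, session2])
n, k = scores.shape
grand = scores.mean()
ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ms_rows = ss_rows / (n - 1)
ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
icc_c = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
print(f"Consistency ICC: {icc_c:.2f}")
```

The ICC shown is the single-measures consistency form, (MS_rows − MS_error) / (MS_rows + (k − 1)·MS_error); the exact ICC variant used in the study may differ from this sketch.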
format | Online Article Text |
id | pubmed-7331240 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-73312402020-07-06 Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing Simmatis, Leif E. R. Early, Spencer Moore, Kimberly D. Appaqaq, Simone Scott, Stephen H. J Neuroeng Rehabil Research BACKGROUND: Traditional clinical assessments are used extensively in neurology; however, they can be coarse, which can also make them insensitive to change. Kinarm is a robotic assessment system that has been used for precise assessment of individuals with neurological impairments. However, this precision also leads to the challenge of identifying whether a given change in performance reflects a significant change in an individual’s ability or is simply natural variation. Our objective here is to derive confidence intervals and thresholds of significant change for Kinarm Standard Tests™ (KST). METHODS: We assessed participants twice within 15 days on all tasks presently available in KST. We determined the 5–95% confidence intervals for each task parameter, and derived thresholds for significant change. We tested for learning effects and corrected for the false discovery rate (FDR) to identify task parameters with significant learning effects. Finally, we calculated intraclass correlation of type ICC [1, 2] (ICC-C) to quantify consistency across assessments. RESULTS: We recruited an average of 56 participants per task. Confidence intervals for Z-Task Scores ranged between 0.61 and 1.55, and the threshold for significant change ranged between 0.87 and 2.19. We determined that 4/11 tasks displayed learning effects that were significant after FDR correction; these 4 tasks primarily tested cognition or cognitive-motor integration. ICC-C values for Z-Task Scores ranged from 0.26 to 0.76. CONCLUSIONS: The present results provide statistical bounds on individual performance for KST as well as significant changes across repeated testing. Most measures of performance had good inter-rater reliability. Tasks with a higher cognitive burden seemed to be more susceptible to learning effects, which should be taken into account when interpreting longitudinal assessments of these tasks. BioMed Central 2020-07-02 /pmc/articles/PMC7331240/ /pubmed/32615979 http://dx.doi.org/10.1186/s12984-020-00713-2 Text en © The Author(s) 2020 Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Simmatis, Leif E. R. Early, Spencer Moore, Kimberly D. Appaqaq, Simone Scott, Stephen H. Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
title | Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
title_full | Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
title_fullStr | Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
title_full_unstemmed | Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
title_short | Statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
title_sort | statistical measures of motor, sensory and cognitive performance across repeated robot-based testing |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7331240/ https://www.ncbi.nlm.nih.gov/pubmed/32615979 http://dx.doi.org/10.1186/s12984-020-00713-2 |
work_keys_str_mv | AT simmatisleifer statisticalmeasuresofmotorsensoryandcognitiveperformanceacrossrepeatedrobotbasedtesting AT earlyspencer statisticalmeasuresofmotorsensoryandcognitiveperformanceacrossrepeatedrobotbasedtesting AT moorekimberlyd statisticalmeasuresofmotorsensoryandcognitiveperformanceacrossrepeatedrobotbasedtesting AT appaqaqsimone statisticalmeasuresofmotorsensoryandcognitiveperformanceacrossrepeatedrobotbasedtesting AT scottstephenh statisticalmeasuresofmotorsensoryandcognitiveperformanceacrossrepeatedrobotbasedtesting |