
Enhancing simulations with intra-subject variability for improved psychophysical assessments


Bibliographic Details
Main Authors: Rinderknecht, Mike D., Lambercy, Olivier, Gassert, Roger
Format: Online Article Text
Language: English
Published: Public Library of Science 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312217/
https://www.ncbi.nlm.nih.gov/pubmed/30596761
http://dx.doi.org/10.1371/journal.pone.0209839
_version_ 1783383736124964864
author Rinderknecht, Mike D.
Lambercy, Olivier
Gassert, Roger
author_facet Rinderknecht, Mike D.
Lambercy, Olivier
Gassert, Roger
author_sort Rinderknecht, Mike D.
collection PubMed
description Psychometric properties of perceptual assessments, like reliability, depend on stochastic properties of psychophysical sampling procedures resulting in method variability, as well as inter- and intra-subject variability. Method variability is commonly minimized by optimizing sampling procedures through computer simulations. Inter-subject variability is inherent to the population of interest and cannot be influenced. Intra-subject variability introduced by confounds (e.g., inattention or lack of motivation) cannot be simply quantified from experimental data, as these data also include method variability. Therefore, this aspect is generally neglected when developing assessments. Yet, comparing method variability and intra-subject variability could give insights into whether effort should be invested in optimizing the sampling procedure, or in addressing potential confounds instead. We propose a new approach to estimate intra-subject variability of psychometric functions by combining computer simulations and behavioral data, and to account for it when simulating experiments. The approach was illustrated in a real-world scenario of proprioceptive difference threshold assessments. The behavioral study revealed a test-retest reliability of r = 0.212. Computer simulations without considering intra-subject variability predicted a reliability of r = 0.768, whereas the new approach including an intra-subject variability model led to a realistic estimate of reliability (r = 0.207). Such a model also allows computing the theoretically maximally attainable reliability (r = 0.552) assuming an ideal sampling procedure. Comparing the reliability estimates when exclusively accounting for method variability versus intra-subject variability reveals that intra-subject variability should be reduced by addressing confounds and that only optimizing the sampling procedure may be insufficient to achieve a high reliability.
This new approach allows computing the intra-subject variability with only two measurements per subject, and predicting the reliability for a larger number of subjects and retests based on simulations, without requiring additional experiments. Such a predictive tool is especially valuable for target populations where time is scarce, e.g., for assessments in clinical settings.
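The variance decomposition described in the abstract (method, inter-, and intra-subject variability jointly determining test-retest reliability) can be illustrated with a small Monte Carlo sketch. This is not the authors' actual model: the additive-noise structure and all standard deviations below are illustrative assumptions, chosen only to reproduce the qualitative ordering of the three reliability estimates.

```python
import numpy as np

# Illustrative (assumed) standard deviations; NOT values from the paper.
SIGMA_INTER = 1.0    # inter-subject spread of true thresholds
SIGMA_INTRA = 0.8    # session-to-session drift from confounds
SIGMA_METHOD = 0.3   # estimation noise of the sampling procedure

rng = np.random.default_rng(42)

def simulated_reliability(sigma_intra, sigma_method, n_subjects=20000):
    """Pearson test-retest correlation under an additive-noise model."""
    true_thr = rng.normal(0.0, SIGMA_INTER, n_subjects)

    def one_session():
        # measured threshold = true value + intra-subject drift + method noise
        return (true_thr
                + rng.normal(0.0, sigma_intra, n_subjects)
                + rng.normal(0.0, sigma_method, n_subjects))

    return np.corrcoef(one_session(), one_session())[0, 1]

# Method variability only (the classic simulation the abstract criticizes):
r_method_only = simulated_reliability(0.0, SIGMA_METHOD)
# Both variability sources (the proposed, more realistic simulation):
r_realistic = simulated_reliability(SIGMA_INTRA, SIGMA_METHOD)
# Ideal sampling procedure (upper bound set by intra-subject variability):
r_upper_bound = simulated_reliability(SIGMA_INTRA, 0.0)
```

Under this additive model the expected correlation is sigma_inter^2 / (sigma_inter^2 + sigma_intra^2 + sigma_method^2), so the three estimates reproduce the ordering reported in the abstract: ignoring intra-subject variability is overly optimistic (cf. r = 0.768), the ideal-procedure bound sits in between (cf. r = 0.552), and accounting for both sources yields the lowest, most realistic value (cf. r = 0.207).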
format Online
Article
Text
id pubmed-6312217
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-63122172019-01-08 Enhancing simulations with intra-subject variability for improved psychophysical assessments Rinderknecht, Mike D. Lambercy, Olivier Gassert, Roger PLoS One Research Article Psychometric properties of perceptual assessments, like reliability, depend on stochastic properties of psychophysical sampling procedures resulting in method variability, as well as inter- and intra-subject variability. Method variability is commonly minimized by optimizing sampling procedures through computer simulations. Inter-subject variability is inherent to the population of interest and cannot be influenced. Intra-subject variability introduced by confounds (e.g., inattention or lack of motivation) cannot be simply quantified from experimental data, as these data also include method variability. Therefore, this aspect is generally neglected when developing assessments. Yet, comparing method variability and intra-subject variability could give insights into whether effort should be invested in optimizing the sampling procedure, or in addressing potential confounds instead. We propose a new approach to estimate intra-subject variability of psychometric functions by combining computer simulations and behavioral data, and to account for it when simulating experiments. The approach was illustrated in a real-world scenario of proprioceptive difference threshold assessments. The behavioral study revealed a test-retest reliability of r = 0.212. Computer simulations without considering intra-subject variability predicted a reliability of r = 0.768, whereas the new approach including an intra-subject variability model led to a realistic estimate of reliability (r = 0.207). Such a model also allows computing the theoretically maximally attainable reliability (r = 0.552) assuming an ideal sampling procedure.
Comparing the reliability estimates when exclusively accounting for method variability versus intra-subject variability reveals that intra-subject variability should be reduced by addressing confounds and that only optimizing the sampling procedure may be insufficient to achieve a high reliability. This new approach allows computing the intra-subject variability with only two measurements per subject, and predicting the reliability for a larger number of subjects and retests based on simulations, without requiring additional experiments. Such a predictive tool is especially valuable for target populations where time is scarce, e.g., for assessments in clinical settings. Public Library of Science 2018-12-31 /pmc/articles/PMC6312217/ /pubmed/30596761 http://dx.doi.org/10.1371/journal.pone.0209839 Text en © 2018 Rinderknecht et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Rinderknecht, Mike D.
Lambercy, Olivier
Gassert, Roger
Enhancing simulations with intra-subject variability for improved psychophysical assessments
title Enhancing simulations with intra-subject variability for improved psychophysical assessments
title_full Enhancing simulations with intra-subject variability for improved psychophysical assessments
title_fullStr Enhancing simulations with intra-subject variability for improved psychophysical assessments
title_full_unstemmed Enhancing simulations with intra-subject variability for improved psychophysical assessments
title_short Enhancing simulations with intra-subject variability for improved psychophysical assessments
title_sort enhancing simulations with intra-subject variability for improved psychophysical assessments
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6312217/
https://www.ncbi.nlm.nih.gov/pubmed/30596761
http://dx.doi.org/10.1371/journal.pone.0209839
work_keys_str_mv AT rinderknechtmiked enhancingsimulationswithintrasubjectvariabilityforimprovedpsychophysicalassessments
AT lambercyolivier enhancingsimulationswithintrasubjectvariabilityforimprovedpsychophysicalassessments
AT gassertroger enhancingsimulationswithintrasubjectvariabilityforimprovedpsychophysicalassessments