Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples
Main Authors: Siritzky, Emma M.; Cox, Patrick H.; Nadler, Sydni M.; Grady, Justin N.; Kravitz, Dwight J.; Mitroff, Stephen R.
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2023
Subjects: Original Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590344/ https://www.ncbi.nlm.nih.gov/pubmed/37864737 http://dx.doi.org/10.1186/s41235-023-00520-y
_version_ | 1785123970744320000 |
author | Siritzky, Emma M.; Cox, Patrick H.; Nadler, Sydni M.; Grady, Justin N.; Kravitz, Dwight J.; Mitroff, Stephen R. |
author_facet | Siritzky, Emma M.; Cox, Patrick H.; Nadler, Sydni M.; Grady, Justin N.; Kravitz, Dwight J.; Mitroff, Stephen R. |
author_sort | Siritzky, Emma M. |
collection | PubMed |
description | Standard cognitive psychology research practices can introduce inadvertent sampling biases that reduce the reliability and generalizability of the findings. Researchers commonly acknowledge and understand that any given study sample is not perfectly generalizable, especially when implementing typical experimental constraints (e.g., limiting recruitment to specific age ranges or to individuals with normal color vision). However, less obvious systematic sampling constraints, referred to here as “shadow” biases, can be unintentionally introduced and can easily go unnoticed. For example, many standard cognitive psychology study designs involve lengthy and tedious experiments with simple, repetitive stimuli. Such testing environments may 1) be aversive to some would-be participants (e.g., those high in certain neurodivergent symptoms) who may self-select not to enroll in such studies, or 2) contribute to participant attrition, both of which reduce the sample’s representativeness. Likewise, standard performance-based data exclusion efforts (e.g., minimum accuracy or response time) or attention checks can systematically remove data from participants from subsets of the population (e.g., those low in conscientiousness). This commentary focuses on the theoretical and practical issues behind these non-obvious and often unacknowledged “shadow” biases, offers a simple illustration with real data as a proof of concept of how applying attention checks can systematically skew latent/hidden variables in the included population, and then discusses the broader implications with suggestions for how to manage and reduce, or at a minimum acknowledge, the problem. |
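The selection effect described in the abstract (an exclusion criterion correlated with a latent trait skewing the retained sample) can be illustrated with a toy simulation. This is not the paper's actual analysis or data; it is a minimal sketch assuming a hypothetical logistic relationship between a latent "conscientiousness" trait and the probability of passing an attention check:

```python
import math
import random
import statistics

random.seed(0)

# Latent trait (e.g., conscientiousness), standardized in the population.
population = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def passes_attention_check(trait: float) -> bool:
    """Hypothetical model: higher trait -> higher pass probability (logistic)."""
    p_pass = 1.0 / (1.0 + math.exp(-(1.0 + 1.5 * trait)))
    return random.random() < p_pass

# Applying the attention-check exclusion retains a biased subsample.
included = [t for t in population if passes_attention_check(t)]

print(f"population mean trait: {statistics.mean(population):+.3f}")
print(f"included mean trait:   {statistics.mean(included):+.3f}")
print(f"retained: {len(included) / len(population):.1%}")
```

The retained sample's mean trait is shifted upward relative to the population, even though the trait itself was never measured or targeted by the exclusion rule, which is the "shadow" bias the commentary describes.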
format | Online Article Text |
id | pubmed-10590344 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-105903442023-10-23 Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples Siritzky, Emma M.; Cox, Patrick H.; Nadler, Sydni M.; Grady, Justin N.; Kravitz, Dwight J.; Mitroff, Stephen R. Cogn Res Princ Implic Original Article Springer International Publishing 2023-10-21 /pmc/articles/PMC10590344/ /pubmed/37864737 http://dx.doi.org/10.1186/s41235-023-00520-y Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format with appropriate credit; see https://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Original Article Siritzky, Emma M. Cox, Patrick H. Nadler, Sydni M. Grady, Justin N. Kravitz, Dwight J. Mitroff, Stephen R. Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
title | Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
title_full | Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
title_fullStr | Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
title_full_unstemmed | Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
title_short | Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
title_sort | standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic “shadow” biases in participant samples |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590344/ https://www.ncbi.nlm.nih.gov/pubmed/37864737 http://dx.doi.org/10.1186/s41235-023-00520-y |
work_keys_str_mv | AT siritzkyemmam standardexperimentalparadigmdesignsanddataexclusionpracticesincognitivepsychologycaninadvertentlyintroducesystematicshadowbiasesinparticipantsamples AT coxpatrickh standardexperimentalparadigmdesignsanddataexclusionpracticesincognitivepsychologycaninadvertentlyintroducesystematicshadowbiasesinparticipantsamples AT nadlersydnim standardexperimentalparadigmdesignsanddataexclusionpracticesincognitivepsychologycaninadvertentlyintroducesystematicshadowbiasesinparticipantsamples AT gradyjustinn standardexperimentalparadigmdesignsanddataexclusionpracticesincognitivepsychologycaninadvertentlyintroducesystematicshadowbiasesinparticipantsamples AT kravitzdwightj standardexperimentalparadigmdesignsanddataexclusionpracticesincognitivepsychologycaninadvertentlyintroducesystematicshadowbiasesinparticipantsamples AT mitroffstephenr standardexperimentalparadigmdesignsanddataexclusionpracticesincognitivepsychologycaninadvertentlyintroducesystematicshadowbiasesinparticipantsamples |