
Assessing the practical differences between model selection methods in inferences about choice response time tasks


Bibliographic Details
Main author: Evans, Nathan J.
Format: Online Article Text
Language: English
Published: Springer US 2019
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6710222/
https://www.ncbi.nlm.nih.gov/pubmed/30783896
http://dx.doi.org/10.3758/s13423-018-01563-9
_version_ 1783446302156128256
author Evans, Nathan J.
author_facet Evans, Nathan J.
author_sort Evans, Nathan J.
collection PubMed
description Evidence accumulation models (EAMs) have become the dominant modeling framework within rapid decision-making, using choice response time distributions to make inferences about the underlying decision process. These models are often applied to empirical data as “measurement tools”, with different theoretical accounts being contrasted within the framework of the model. Some method is then needed to decide between these competing theoretical accounts, as only assessing the models on their ability to fit trends in the empirical data ignores model flexibility and therefore creates a bias towards more flexible models. However, there is no objectively optimal method to select between models, with methods varying in both their computational tractability and theoretical basis. I provide a systematic comparison between nine different model selection methods using a popular EAM—the linear ballistic accumulator (LBA; Brown & Heathcote, Cognitive Psychology 57(3), 153–178 2008)—in a large-scale simulation study and the empirical data of Dutilh et al. (Psychonomic Bulletin and Review, 1–19 2018). I find that the “predictive accuracy” class of methods (i.e., the Akaike Information Criterion [AIC], the Deviance Information Criterion [DIC], and the Widely Applicable Information Criterion [WAIC]) make different inferences to the “Bayes factor” class of methods (i.e., the Bayesian Information Criterion [BIC] and Bayes factors) in many, but not all, instances, and that the simpler methods (i.e., AIC and BIC) make inferences that are highly consistent with their more complex counterparts. These findings suggest that researchers should be able to use simpler “parameter counting” methods when applying the LBA and be confident in their inferences, but that researchers need to carefully consider and justify the general class of model selection method that they use, as different classes of methods often result in different inferences. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.3758/s13423-018-01563-9) contains supplementary material, which is available to authorized users.
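The abstract's "parameter counting" methods (AIC and BIC) differ only in how they penalize a model's maximized log-likelihood for its number of free parameters. The following is a minimal illustrative sketch of that penalty, not the paper's implementation; the model names, log-likelihoods, parameter counts, and trial count below are hypothetical placeholders rather than values from the study.

import numpy as np

def aic(log_lik: float, k: int) -> float:
    # Akaike Information Criterion: fixed penalty of 2 per free parameter.
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    # Bayesian Information Criterion: the penalty grows with log of the sample size.
    return k * np.log(n) - 2 * log_lik

# Hypothetical example: two LBA variants fit to the same 1,000-trial data set.
full_model   = {"log_lik": -1480.0, "k": 8}   # e.g., drift rates free to vary across conditions
simple_model = {"log_lik": -1492.0, "k": 5}   # e.g., a single shared drift rate

n_trials = 1000
for name, m in [("full", full_model), ("simple", simple_model)]:
    print(f"{name}: AIC = {aic(m['log_lik'], m['k']):.1f}, "
          f"BIC = {bic(m['log_lik'], m['k'], n_trials):.1f}")

# Lower values indicate the preferred model; BIC's log(n) penalty weighs extra
# parameters more heavily than AIC's constant penalty, which is one reason the
# two classes of methods can favor different models on the same data.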
format Online
Article
Text
id pubmed-6710222
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-6710222 2019-09-06 Assessing the practical differences between model selection methods in inferences about choice response time tasks Evans, Nathan J. Psychon Bull Rev Theoretical Review Evidence accumulation models (EAMs) have become the dominant modeling framework within rapid decision-making, using choice response time distributions to make inferences about the underlying decision process. These models are often applied to empirical data as “measurement tools”, with different theoretical accounts being contrasted within the framework of the model. Some method is then needed to decide between these competing theoretical accounts, as only assessing the models on their ability to fit trends in the empirical data ignores model flexibility and therefore creates a bias towards more flexible models. However, there is no objectively optimal method to select between models, with methods varying in both their computational tractability and theoretical basis. I provide a systematic comparison between nine different model selection methods using a popular EAM—the linear ballistic accumulator (LBA; Brown & Heathcote, Cognitive Psychology 57(3), 153–178 2008)—in a large-scale simulation study and the empirical data of Dutilh et al. (Psychonomic Bulletin and Review, 1–19 2018). I find that the “predictive accuracy” class of methods (i.e., the Akaike Information Criterion [AIC], the Deviance Information Criterion [DIC], and the Widely Applicable Information Criterion [WAIC]) make different inferences to the “Bayes factor” class of methods (i.e., the Bayesian Information Criterion [BIC] and Bayes factors) in many, but not all, instances, and that the simpler methods (i.e., AIC and BIC) make inferences that are highly consistent with their more complex counterparts. These findings suggest that researchers should be able to use simpler “parameter counting” methods when applying the LBA and be confident in their inferences, but that researchers need to carefully consider and justify the general class of model selection method that they use, as different classes of methods often result in different inferences. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.3758/s13423-018-01563-9) contains supplementary material, which is available to authorized users. Springer US 2019-02-19 2019 /pmc/articles/PMC6710222/ /pubmed/30783896 http://dx.doi.org/10.3758/s13423-018-01563-9 Text en © The Author(s) 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
spellingShingle Theoretical Review
Evans, Nathan J.
Assessing the practical differences between model selection methods in inferences about choice response time tasks
title Assessing the practical differences between model selection methods in inferences about choice response time tasks
title_full Assessing the practical differences between model selection methods in inferences about choice response time tasks
title_fullStr Assessing the practical differences between model selection methods in inferences about choice response time tasks
title_full_unstemmed Assessing the practical differences between model selection methods in inferences about choice response time tasks
title_short Assessing the practical differences between model selection methods in inferences about choice response time tasks
title_sort assessing the practical differences between model selection methods in inferences about choice response time tasks
topic Theoretical Review
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6710222/
https://www.ncbi.nlm.nih.gov/pubmed/30783896
http://dx.doi.org/10.3758/s13423-018-01563-9
work_keys_str_mv AT evansnathanj assessingthepracticaldifferencesbetweenmodelselectionmethodsininferencesaboutchoiceresponsetimetasks