
Testing the Magnitude of Correlations Across Experimental Conditions


Bibliographic Details
Main Author: Di Plinio, Simone
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9177411/
https://www.ncbi.nlm.nih.gov/pubmed/35693490
http://dx.doi.org/10.3389/fpsyg.2022.860213
author Di Plinio, Simone
collection PubMed
description Correlation coefficients are often compared to investigate data across multiple research fields, as they allow investigators to determine different degrees of correlation with independent variables. Even with adequate sample sizes, such differences may be minor but still scientifically relevant. To date, although much effort has gone into developing methods for estimating differences across correlation coefficients, adequate tools for variable sample sizes and correlational strengths have yet to be tested. The present study evaluated four different methods for detecting the difference between two correlations and tested the adequacy of each method using simulations with multiple data structures. The methods tested were Cohen’s q, Fisher’s method, linear mixed-effects models (LMEM), and an ad hoc procedure that integrates bootstrap and effect size estimation. Correlation strengths and sample sizes were varied across a wide range of simulations to test the power of the methods to reject the null hypothesis (i.e., that the two correlations are equal). Results showed that Fisher’s method and the LMEM failed to reject the null hypothesis even in the presence of relevant differences between correlations, and that Cohen’s method was not sensitive to the data structure. Bootstrap followed by effect size estimation proved a fair, unbiased compromise for estimating quantitative differences between statistical associations, producing outputs that can be easily compared across studies. This method is easily implementable in MATLAB through the bootes function, which was made available online by the author at MathWorks.
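As a rough illustration of two of the approaches named in the abstract, the sketch below implements Fisher's z test for two independent correlations (Cohen's q is the same difference of Fisher-transformed coefficients, reported as an effect size) and a percentile-bootstrap estimate of the difference between two correlations. This is a minimal Python sketch, not the author's MATLAB bootes function; the function names, defaults, and 95% interval below are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def fisher_z_test(r1, n1, r2, n2):
    """Fisher's z test for the difference between two independent correlations.

    Cohen's q is the numerator: the difference of the Fisher-transformed r's.
    """
    z1, z2 = np.arctanh(r1), np.arctanh(r2)      # Fisher's r-to-z transform
    se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))   # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided normal p
    return z, p

def bootstrap_r_diff(x1, y1, x2, y2, n_boot=2000, seed=0):
    """Percentile-bootstrap mean and 95% CI for r1 - r2 (independent samples)."""
    rng = np.random.default_rng(seed)
    n1, n2 = len(x1), len(x2)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n1, n1)              # resample (x, y) pairs jointly
        j = rng.integers(0, n2, n2)
        r1 = np.corrcoef(x1[i], y1[i])[0, 1]
        r2 = np.corrcoef(x2[j], y2[j])[0, 1]
        diffs[b] = r1 - r2
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])
```

Resampling the (x, y) pairs jointly, rather than each variable separately, preserves the within-sample dependence that the correlation measures; this joint-pair resampling is the standard bootstrap choice for correlation statistics.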
format Online
Article
Text
id pubmed-9177411
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9177411 2022-06-10 Testing the Magnitude of Correlations Across Experimental Conditions Di Plinio, Simone Front Psychol Psychology Frontiers Media S.A. 2022-05-26 /pmc/articles/PMC9177411/ /pubmed/35693490 http://dx.doi.org/10.3389/fpsyg.2022.860213 Text en Copyright © 2022 Di Plinio.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
title Testing the Magnitude of Correlations Across Experimental Conditions
topic Psychology