Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI

The importance of integrating research findings is incontrovertible, and procedures for coordinate-based meta-analysis (CBMA) such as Activation Likelihood Estimation (ALE) have become a popular approach to combining the results of fMRI studies when only peaks of activation are reported. As meta-analytical findings help build cumulative knowledge and guide future research, not only the quality of such analyses but also the way conclusions are drawn is extremely important. Like classical meta-analyses, coordinate-based meta-analyses can be subject to different forms of publication bias, which may impact results and invalidate findings. The file drawer problem refers to studies that fail to get published because they do not obtain the anticipated results (e.g., due to lack of statistical significance). To assess the stability of meta-analytical results and determine their robustness against the potential presence of the file drawer problem, we present an algorithm to determine the number of noise studies that can be added to an existing ALE fMRI meta-analysis before the spatial convergence of reported activation peaks across studies in specific regions is no longer statistically significant. While methods to gain insight into the validity and limitations of results exist for other coordinate-based meta-analysis toolboxes, such as Galbraith plots for Multilevel Kernel Density Analysis (MKDA) and funnel plots and Egger tests for seed-based d mapping, this procedure is the first to assess robustness against potential publication bias for the ALE algorithm. The method assists in interpreting meta-analytical results with the appropriate caution by examining how stable results remain in the presence of unreported information that may differ systematically from the information that is included. At the same time, the procedure provides further insight into the number of studies that drive the meta-analytical results. We illustrate the procedure through an example and test the effect of several parameters through extensive simulations. Code to generate noise studies is made freely available, enabling users to easily apply the algorithm when interpreting their results.
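
The procedure described above is a Fail-Safe-N-style robustness check: artificial "noise" studies with randomly placed activation foci are added to the meta-analysis until spatial convergence in a region of interest is no longer statistically significant. The sketch below is a minimal, hypothetical Python illustration of that idea only, not the authors' released code; the brain-mask coordinate array, the range of foci per noise study, and the region_is_significant callable (which would rerun the ALE analysis with an external toolbox and report whether the cluster of interest survives thresholding) are all assumptions introduced for the example.

```python
import numpy as np

def generate_noise_study(rng, mask_coords, n_foci_range=(1, 10)):
    """Draw one hypothetical 'noise study': a random number of activation
    foci sampled uniformly from voxel coordinates inside a brain mask."""
    n_foci = rng.integers(n_foci_range[0], n_foci_range[1] + 1)
    idx = rng.choice(len(mask_coords), size=n_foci, replace=False)
    return mask_coords[idx]          # (n_foci, 3) array of x, y, z coordinates

def fail_safe_n(real_studies, mask_coords, region_is_significant,
                max_noise=300, seed=0):
    """Add noise studies one at a time and return how many the meta-analysis
    tolerates before the region of interest loses significance.

    region_is_significant: callable that takes the current list of studies,
    reruns the ALE analysis, and returns True if the cluster of interest is
    still significant (assumed interface, not a real toolbox API).
    """
    rng = np.random.default_rng(seed)
    studies = list(real_studies)     # copy so the original list is untouched
    for k in range(1, max_noise + 1):
        studies.append(generate_noise_study(rng, mask_coords))
        if not region_is_significant(studies):
            return k - 1             # largest number of noise studies survived
    return max_noise                 # effect survived every noise study added
```

Because each step reruns a full ALE analysis, a bisection search over the number of noise studies would typically be more efficient than the linear loop sketched here.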

Bibliographic Details
Main Authors: Acar, Freya; Seurinck, Ruth; Eickhoff, Simon B.; Moerkerke, Beatrijs
Format: Online Article (Text)
Language: English
Published: PLoS One (Public Library of Science), 30 November 2018
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6267999/
https://www.ncbi.nlm.nih.gov/pubmed/30500854
http://dx.doi.org/10.1371/journal.pone.0208177
Rights: © 2018 Acar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.