Trends in the sample size, statistics, and contributions to the BrainMap database of activation likelihood estimation meta‐analyses: An empirical study of 10‐year data

Bibliographic Details
Main Authors: Yeung, Andy Wai Kan, Robertson, Michaela, Uecker, Angela, Fox, Peter T., Eickhoff, Simon B.
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9980884/
https://www.ncbi.nlm.nih.gov/pubmed/36479854
http://dx.doi.org/10.1002/hbm.26177
Description
Summary: The literature on neuroimaging meta-analysis has been thriving for over a decade. The majority of these studies were coordinate-based meta-analyses, particularly those using the activation likelihood estimation (ALE) approach. A meta-evaluation of these meta-analyses was performed to qualitatively evaluate their design and reporting standards. The publications listed on the BrainMap website were screened. Six hundred and three ALE papers published during 2010–2019 were included and analysed. For reporting standards, most of the ALE papers reported the total number of Papers involved and mentioned the inclusion/exclusion criteria for Paper selection. However, most papers did not describe how data redundancy was avoided when multiple related Experiments were reported within one paper. The most prevalent repeated-measures correction methods were voxel-level FDR (54.4%) and cluster-level FWE (33.8%), with the latter quickly replacing the former since 2016. For study characteristics, sample size, in terms of the number of Papers included per ALE paper and the number of Experiments per analysis, appeared stable over the decade. One-fifth of the surveyed ALE papers failed to meet the recommendation of having >17 Experiments per analysis. For data sharing, most papers did not provide their input or output data. In conclusion, the field has matured well in terms of the rising dominance of cluster-level FWE correction and slightly improved reporting on the elimination of data redundancy and the provision of input data. The provision of Data and Code Availability statements and flow charts of the literature screening process, as well as data submission to BrainMap, should be further encouraged.