Predicting and reasoning about replicability using structured groups

Bibliographic Details
Main Authors: Wintle, Bonnie C., Smith, Eden T., Bush, Martin, Mody, Fallon, Wilkinson, David P., Hanea, Anca M., Marcoci, Alexandru, Fraser, Hannah, Hemming, Victoria, Thorn, Felix Singleton, McBride, Marissa F., Gould, Elliot, Head, Andrew, Hamilton, Daniel G., Kambouris, Steven, Rumpff, Libby, Hoekstra, Rink, Burgman, Mark A., Fidler, Fiona
Format: Online Article Text
Language: English
Published: The Royal Society, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10245209/
https://www.ncbi.nlm.nih.gov/pubmed/37293358
http://dx.doi.org/10.1098/rsos.221553
Description
Summary: This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed methods approach, it draws on qualitative and quantitative data elicited from groups using a structured approach called the IDEA protocol (‘investigate’, ‘discuss’, ‘estimate’ and ‘aggregate’). Five groups of five people with relevant domain expertise evaluated 25 research claims that were subject to at least one replication study. Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as ‘effect size’ and ‘reputation’ (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.