Statistical practice and transparent reporting in the neurosciences: Preclinical motor behavioral experiments
Main Authors: |  |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2022 |
Subjects: |  |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8936466/ https://www.ncbi.nlm.nih.gov/pubmed/35312695 http://dx.doi.org/10.1371/journal.pone.0265154 |
Summary: | Longitudinal and behavioral preclinical animal studies generate complex data, which may not be well matched to statistical approaches common in this literature. Analyses that do not adequately account for complexity may result in overly optimistic study conclusions, with consequences for reproducibility and translational decision-making. Recent work interrogating methodological shortcomings in animal research has not yet comprehensively investigated statistical shortcomings in the analysis of complex longitudinal and behavioral data. To this end, the current cross-sectional meta-research study rigorously reviewed published mouse or rat controlled experiments for motor rehabilitation in three neurologic conditions to evaluate statistical choices and reporting. Medline via PubMed was queried in February 2020 for English-language articles published January 1, 2017 through December 31, 2019. Included were articles that used rat or mouse models of stroke, Parkinson's disease, or traumatic brain injury, employed a therapeutic controlled experimental design to determine efficacy, and included at least one functional behavioral assessment or global evaluation of function. 241 articles from 99 journals were evaluated independently by a team of nine raters. Articles were assessed for statistical handling of non-independence, animal attrition, outliers, ordinal data, and multiplicity. Exploratory analyses evaluated whether transparency or statistical choices differed as a function of journal factors. A majority of articles failed to account for sources of non-independence in the data (74–93%) and/or did not analytically account for mid-treatment animal attrition (78%). Ordinal variables were often treated as continuous (37%), outliers were predominantly not mentioned (83%), and plots often concealed the distribution of the data (51%). Statistical choices and transparency did not differ with regard to journal rank or reporting requirements.
Statistical misapplication can result in invalid experimental findings, and inadequate reporting obscures errors. Clinician-scientists evaluating preclinical work for translational promise should be mindful of these commonplace errors. Interventions are needed to improve statistical decision-making in preclinical behavioral neurosciences research. |
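One of the pitfalls flagged in the summary above, treating ordinal behavioral scores as continuous, can be illustrated with a brief sketch. The data below are simulated and all variable names are hypothetical; this is not code from the study, only an example of the kind of rank-based analysis appropriate for ordinal outcomes:

```python
# Illustrative sketch with simulated data: an ordinal behavioral score
# (e.g., a 0-5 neurological severity scale) should not be fed to a t-test,
# which assumes interval-scaled, roughly normal data. A rank-based test
# such as the Mann-Whitney U is the safer default for such outcomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical severity scores for 12 control and 12 treated animals.
control = rng.integers(2, 6, size=12)  # ordinal values 2-5
treated = rng.integers(0, 4, size=12)  # ordinal values 0-3

# Compare group distributions without assuming the scores are continuous.
u_stat, p_value = stats.mannwhitneyu(control, treated, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```

The same caution extends to the other issues the review tallies: repeated measurements on one animal are non-independent and call for models (e.g., mixed effects) that encode the grouping, rather than analyses that treat every observation as a separate animal.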