Automatic classification of experimental models in biomedical literature to support searching for alternative methods to animal experiments
Main authors:
Format: Online Article; Text
Language: English
Published: BioMed Central, 2023
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10472567/
https://www.ncbi.nlm.nih.gov/pubmed/37658458
http://dx.doi.org/10.1186/s13326-023-00292-w
Summary: Current animal protection laws require the replacement of animal experiments with alternative methods whenever such methods are suitable to reach the intended scientific objective. However, searching for alternative methods in the scientific literature is a time-consuming task that requires careful screening of an enormously large number of experimental biomedical publications. The identification of potentially relevant methods, e.g., organ or cell culture models or computer simulations, can be supported with text-mining tools built specifically for this purpose. Such tools are trained (or fine-tuned) on relevant data sets labeled by human experts. We developed the GoldHamster corpus, composed of 1,600 PubMed (Medline) articles (titles and abstracts), in which we manually identified the experimental model used according to a set of eight labels, namely: “in vivo”, “organs”, “primary cells”, “immortal cell lines”, “invertebrates”, “humans”, “in silico” and “other” (models). We recruited 13 annotators with expertise in the biomedical domain and assigned each article to two individuals. Four additional rounds of annotation aimed at improving the quality of the annotations with disagreements from the first round. Furthermore, we conducted various supervised machine learning experiments to evaluate the corpus for our classification task. We obtained more than 7,000 document-level annotations for the above labels. After the first round of annotation, the inter-annotator agreement (kappa coefficient) varied among labels, ranging from 0.42 (for “other”) to 0.82 (for “invertebrates”), with an overall score of 0.62. All disagreements were resolved in the subsequent rounds of annotation. The best-performing machine learning experiment used the PubMedBERT pre-trained model fine-tuned on our corpus, which achieved an overall F1-score of 0.83.

We obtained a corpus with high agreement for all labels, and our evaluation demonstrated that the corpus is suitable for training reliable predictive models for automatic classification of biomedical literature according to the experimental model used. Our SMAFIRA (“Smart feature-based interactive”) search tool (https://smafira.bf3r.de) will employ this classifier to support the retrieval of alternative methods to animal experiments. The corpus is available for download (https://doi.org/10.5281/zenodo.7152295), as are the source code (https://github.com/mariananeves/goldhamster) and the model (https://huggingface.co/SMAFIRA/goldhamster).

Supplementary information: The online version contains supplementary material available at 10.1186/s13326-023-00292-w.
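The per-label agreement figures above are Cohen's kappa coefficients. As a minimal sketch of how such a score is computed for one label between two annotators, the snippet below implements the standard kappa formula in plain Python; the annotation decisions shown are invented for illustration and are not taken from the GoldHamster corpus:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' binary decisions on one label."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of documents where both annotators agree.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement, from each annotator's rate of assigning the label.
    p_a1 = sum(a) / n
    p_b1 = sum(b) / n
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical document-level decisions (1 = label assigned, 0 = not).
annotator_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(round(cohen_kappa(annotator_a, annotator_b), 2))  # → 0.58
```

Unlike raw percentage agreement, kappa discounts agreement expected by chance, which is why it is the usual choice for reporting inter-annotator reliability.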