Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier
Main Authors: Wilson, Emma; Cruz, Florenz; Maclean, Duncan; Ghanawi, Joly; McCann, Sarah K.; Brennan, Paul M.; Liao, Jing; Sena, Emily S.; Macleod, Malcolm
Format: Online Article Text
Language: English
Published: Portland Press Ltd., 2023
Subjects: Translational Science
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9885807/ https://www.ncbi.nlm.nih.gov/pubmed/36630537 http://dx.doi.org/10.1042/CS20220594
_version_ | 1784880006391922688 |
author | Wilson, Emma; Cruz, Florenz; Maclean, Duncan; Ghanawi, Joly; McCann, Sarah K.; Brennan, Paul M.; Liao, Jing; Sena, Emily S.; Macleod, Malcolm
author_facet | Wilson, Emma; Cruz, Florenz; Maclean, Duncan; Ghanawi, Joly; McCann, Sarah K.; Brennan, Paul M.; Liao, Jing; Sena, Emily S.; Macleod, Malcolm
author_sort | Wilson, Emma |
collection | PubMed |
description | Objective: Existing strategies to identify relevant studies for systematic review may not perform equally well across research domains. We compare four approaches based on either human or automated screening of either title and abstract or full text, and report the training of a machine learning algorithm to identify in vitro studies from bibliographic records. Methods: We used a systematic review of oxygen–glucose deprivation (OGD) in PC-12 cells to compare approaches. For human screening, two reviewers independently screened studies based on title and abstract or full text, with disagreements reconciled by a third. For automated screening, we applied text mining to either title and abstract or full text. We trained a machine learning algorithm with decisions from 2000 randomly selected PubMed Central records enriched with a dataset of known in vitro studies. Results: Full-text approaches performed best, with human (sensitivity: 0.990, specificity: 1.000 and precision: 0.994) outperforming text mining (sensitivity: 0.972, specificity: 0.980 and precision: 0.764). For title and abstract, text mining (sensitivity: 0.890, specificity: 0.995 and precision: 0.922) outperformed human screening (sensitivity: 0.862, specificity: 0.998 and precision: 0.975). At our target sensitivity of 95% the algorithm performed with specificity of 0.850 and precision of 0.700. Conclusion: In this in vitro systematic review, human screening based on title and abstract erroneously excluded 14% of relevant studies, perhaps because title and abstract provide an incomplete description of methods used. Our algorithm might be used as a first selection phase in in vitro systematic reviews to limit the extent of full text screening required. |
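The description reports screening performance as sensitivity, specificity and precision, and tunes the classifier to a target sensitivity of 95%. A minimal sketch of that kind of pipeline is given below, assuming a TF-IDF plus logistic-regression classifier over title and abstract text; the feature choice, model, helper-function names and toy records are illustrative assumptions and are not taken from the article's actual implementation.

```python
# Hypothetical sketch: train a text classifier on bibliographic records and pick an
# inclusion threshold that meets a target sensitivity. Illustrative only; not the
# authors' pipeline or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity and precision for include (1) / exclude (0) decisions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # relevant studies correctly included
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # irrelevant studies correctly excluded
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # included studies that are truly relevant
    return sensitivity, specificity, precision


def threshold_for_target_sensitivity(y_true, scores, target=0.95):
    """Return the strictest score cut-off whose sensitivity still meets the target."""
    for cut in sorted(set(scores), reverse=True):
        preds = [1 if s >= cut else 0 for s in scores]
        sens, spec, prec = screening_metrics(y_true, preds)
        if sens >= target:
            return cut, sens, spec, prec
    return None


# Toy training data: title + abstract text with human include/exclude labels.
records = [
    "PC-12 cells exposed to oxygen-glucose deprivation in vitro",
    "A randomised clinical trial of stroke rehabilitation in patients",
    "In vitro neuronal cell culture model of ischaemia using OGD",
    "Epidemiology of cerebrovascular disease in a population cohort",
]
labels = [1, 0, 1, 0]  # 1 = in vitro study, 0 = not

vectoriser = TfidfVectorizer(ngram_range=(1, 2))
features = vectoriser.fit_transform(records)
classifier = LogisticRegression(max_iter=1000).fit(features, labels)

scores = classifier.predict_proba(features)[:, 1]  # probability each record is an in vitro study
print(threshold_for_target_sensitivity(labels, list(scores)))
```

Scanning thresholds from high to low returns the strictest cut-off that still meets the sensitivity target; the specificity and precision observed at that cut-off correspond to the kind of figures quoted above (0.850 and 0.700 at 95% sensitivity), though this sketch does not reproduce them.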
format | Online Article Text |
id | pubmed-9885807 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Portland Press Ltd. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9885807 2023-02-08 Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier Wilson, Emma Cruz, Florenz Maclean, Duncan Ghanawi, Joly McCann, Sarah K. Brennan, Paul M. Liao, Jing Sena, Emily S. Macleod, Malcolm Clin Sci (Lond) Translational Science Objective: Existing strategies to identify relevant studies for systematic review may not perform equally well across research domains. We compare four approaches based on either human or automated screening of either title and abstract or full text, and report the training of a machine learning algorithm to identify in vitro studies from bibliographic records. Methods: We used a systematic review of oxygen–glucose deprivation (OGD) in PC-12 cells to compare approaches. For human screening, two reviewers independently screened studies based on title and abstract or full text, with disagreements reconciled by a third. For automated screening, we applied text mining to either title and abstract or full text. We trained a machine learning algorithm with decisions from 2000 randomly selected PubMed Central records enriched with a dataset of known in vitro studies. Results: Full-text approaches performed best, with human (sensitivity: 0.990, specificity: 1.000 and precision: 0.994) outperforming text mining (sensitivity: 0.972, specificity: 0.980 and precision: 0.764). For title and abstract, text mining (sensitivity: 0.890, specificity: 0.995 and precision: 0.922) outperformed human screening (sensitivity: 0.862, specificity: 0.998 and precision: 0.975). At our target sensitivity of 95% the algorithm performed with specificity of 0.850 and precision of 0.700. Conclusion: In this in vitro systematic review, human screening based on title and abstract erroneously excluded 14% of relevant studies, perhaps because title and abstract provide an incomplete description of methods used. Our algorithm might be used as a first selection phase in in vitro systematic reviews to limit the extent of full text screening required. Portland Press Ltd. 2023-01 2023-01-27 /pmc/articles/PMC9885807/ /pubmed/36630537 http://dx.doi.org/10.1042/CS20220594 Text en © 2023 The Author(s). https://creativecommons.org/licenses/by/4.0/ This is an open access article published by Portland Press Limited on behalf of the Biochemical Society and distributed under the Creative Commons Attribution License 4.0 (CC BY) (https://creativecommons.org/licenses/by/4.0/). Open access for this article was enabled by the participation of The University of Edinburgh in an all-inclusive Read & Publish agreement with Portland Press and the Biochemical Society under a transformative agreement with JISC. |
spellingShingle | Translational Science Wilson, Emma Cruz, Florenz Maclean, Duncan Ghanawi, Joly McCann, Sarah K. Brennan, Paul M. Liao, Jing Sena, Emily S. Macleod, Malcolm Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
title | Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
title_full | Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
title_fullStr | Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
title_full_unstemmed | Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
title_short | Screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
title_sort | screening for in vitro systematic reviews: a comparison of screening methods and training of a machine learning classifier |
topic | Translational Science |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9885807/ https://www.ncbi.nlm.nih.gov/pubmed/36630537 http://dx.doi.org/10.1042/CS20220594 |
work_keys_str_mv | AT wilsonemma screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT cruzflorenz screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT macleanduncan screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT ghanawijoly screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT mccannsarahk screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT brennanpaulm screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT liaojing screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT senaemilys screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier AT macleodmalcolm screeningforinvitrosystematicreviewsacomparisonofscreeningmethodsandtrainingofamachinelearningclassifier |