A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?
Main Authors: O’Connor, Annette M.; Tsafnat, Guy; Thomas, James; Glasziou, Paul; Gilbert, Stephen B.; Hutton, Brian
Format: Online Article Text
Language: English
Published: BioMed Central, 2019
Subjects: Commentary
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6582554/ https://www.ncbi.nlm.nih.gov/pubmed/31215463 http://dx.doi.org/10.1186/s13643-019-1062-0
_version_ | 1783428347533983744 |
author | O’Connor, Annette M. Tsafnat, Guy Thomas, James Glasziou, Paul Gilbert, Stephen B. Hutton, Brian |
author_facet | O’Connor, Annette M. Tsafnat, Guy Thomas, James Glasziou, Paul Gilbert, Stephen B. Hutton, Brian |
author_sort | O’Connor, Annette M. |
collection | PubMed |
description | BACKGROUND: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools. DISCUSSION: We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see “others” in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews address classification problems. Therefore, the evidence that these automation tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared to a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the documentation challenges unique to reproducible software experiments. CONCLUSION: We discuss these adoption barriers with the goal of giving tool developers guidance on how to design and report such evaluations, and of helping end users assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process. |
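The abstract's point about evaluating screening tools in the style of diagnostic test evaluations can be made concrete with a small illustration. The Python sketch below is not from the article; the function name and the include/exclude data are invented for illustration. It scores a hypothetical tool's screening decisions against a human reviewer's decisions, treating the human as the reference standard.

```python
# Illustrative sketch only (not from the article): comparing an automated
# citation-screening tool to a human reviewer using precision and recall,
# as in a diagnostic test evaluation.

def precision_recall(human_labels, tool_labels):
    """Treat the human reviewer's include (1) / exclude (0) decisions as the
    reference standard and score the tool's decisions against them."""
    tp = sum(1 for h, t in zip(human_labels, tool_labels) if h == 1 and t == 1)
    fp = sum(1 for h, t in zip(human_labels, tool_labels) if h == 0 and t == 1)
    fn = sum(1 for h, t in zip(human_labels, tool_labels) if h == 1 and t == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of tool inclusions that were correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of relevant records the tool found
    return precision, recall

# Invented example data: screening decisions for eight records.
human = [1, 1, 0, 0, 1, 0, 1, 0]
tool  = [1, 0, 0, 1, 1, 0, 1, 0]
p, r = precision_recall(human, tool)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.75
```

For screening tasks, recall (sensitivity) is usually weighted more heavily than precision, since a relevant study the tool misses is harder to recover than an extra record passed on to full-text review.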
format | Online Article Text |
id | pubmed-6582554 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-65825542019-06-26 A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? O’Connor, Annette M. Tsafnat, Guy Thomas, James Glasziou, Paul Gilbert, Stephen B. Hutton, Brian Syst Rev Commentary BACKGROUND: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools. DISCUSSION: We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see “others” in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews address classification problems. Therefore, the evidence that these automation tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared to a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the documentation challenges unique to reproducible software experiments. CONCLUSION: We discuss these adoption barriers with the goal of giving tool developers guidance on how to design and report such evaluations, and of helping end users assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process. BioMed Central 2019-06-18 /pmc/articles/PMC6582554/ /pubmed/31215463 http://dx.doi.org/10.1186/s13643-019-1062-0 Text en © The Author(s). 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Commentary O’Connor, Annette M. Tsafnat, Guy Thomas, James Glasziou, Paul Gilbert, Stephen B. Hutton, Brian A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
title | A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
title_full | A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
title_fullStr | A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
title_full_unstemmed | A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
title_short | A question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
title_sort | question of trust: can we build an evidence base to gain trust in systematic review automation technologies? |
topic | Commentary |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6582554/ https://www.ncbi.nlm.nih.gov/pubmed/31215463 http://dx.doi.org/10.1186/s13643-019-1062-0 |
work_keys_str_mv | AT oconnorannettem aquestionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT tsafnatguy aquestionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT thomasjames aquestionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT glaszioupaul aquestionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT gilbertstephenb aquestionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT huttonbrian aquestionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT oconnorannettem questionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT tsafnatguy questionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT thomasjames questionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT glaszioupaul questionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT gilbertstephenb questionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies AT huttonbrian questionoftrustcanwebuildanevidencebasetogaintrustinsystematicreviewautomationtechnologies |