Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies

OBJECTIVE: To systematically examine the design, reporting standards, risk of bias, and claims of studies comparing the performance of diagnostic deep learning algorithms for medical imaging with that of expert clinicians.

DESIGN: Systematic review.

DATA SOURCES: Medline, Embase, Cochrane Central Register of Controlled Trials, and the World Health Organization trial registry from 2010 to June 2019.

ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Randomised trial registrations and non-randomised studies comparing the performance of a deep learning algorithm in medical imaging with that of a contemporary group of one or more expert clinicians. Medical imaging has seen growing interest in deep learning research. The main distinguishing feature of convolutional neural networks (CNNs) in deep learning is that, when fed raw data, they develop their own representations for pattern recognition: the algorithm learns for itself which features of an image are important for classification, rather than being told by humans which features to use. The selected studies aimed to use medical imaging either to predict the absolute risk of existing disease or to classify images into diagnostic groups (eg, disease or non-disease). For example, raw chest radiographs are tagged with a label such as pneumothorax or no pneumothorax, and the CNN learns which pixel patterns suggest pneumothorax.

REVIEW METHODS: Adherence to reporting standards was assessed with CONSORT (consolidated standards of reporting trials) for randomised studies and TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) for non-randomised studies. Risk of bias was assessed with the Cochrane risk of bias tool for randomised studies and PROBAST (prediction model risk of bias assessment tool) for non-randomised studies.

RESULTS: Only 10 records were found for deep learning randomised clinical trials; two have been published (with low risk of bias, except for lack of blinding, and high adherence to reporting standards) and eight are ongoing. Of 81 non-randomised clinical trials identified, only nine were prospective and just six were tested in a real world clinical setting. The median number of experts in the comparator group was only four (interquartile range 2-9). Full access to all datasets and code was severely limited (unavailable in 95% and 93% of studies, respectively). The overall risk of bias was high in 58 of 81 studies, and adherence to reporting standards was suboptimal (<50% adherence for 12 of 29 TRIPOD items). 61 of 81 studies (75%) stated in their abstract that the performance of artificial intelligence was at least comparable to (or better than) that of clinicians, yet only 31 of 81 studies (38%) stated that further prospective studies or trials were required.

CONCLUSIONS: Few prospective deep learning studies and randomised trials exist in medical imaging. Most non-randomised studies are not prospective, are at high risk of bias, and deviate from existing reporting standards. Data and code availability are lacking in most studies, and human comparator groups are often small. Future studies should diminish risk of bias, enhance real world clinical relevance, improve reporting and transparency, and appropriately temper conclusions.

STUDY REGISTRATION: PROSPERO CRD42019123605.
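The eligibility criteria above describe the defining trait of CNNs: given raw labelled images, they learn for themselves which pixel patterns predict the label. As a purely illustrative sketch, not code from this review or from any of the studies it covers, the following PyTorch fragment shows a minimal binary image classifier of that kind; the TinyCNN name, layer sizes, image dimensions, and pneumothorax labels are assumptions chosen only to mirror the abstract's example.

```python
# Minimal, assumed sketch of a CNN classifier that learns its own image
# features from labelled grayscale radiographs, rather than being given
# hand-engineered features. Illustrative only; not from the reviewed studies.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learn local pixel patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # compose them into larger motifs
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # pool to one feature vector per image
        )
        self.classifier = nn.Linear(32, 2)                # scores for the two diagnostic groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
images = torch.randn(4, 1, 224, 224)    # batch of 4 one-channel radiographs (dummy data)
labels = torch.tensor([0, 1, 0, 1])     # 0 = no pneumothorax, 1 = pneumothorax (assumed labels)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                         # gradients drive the learning of the features
```

The point the abstract makes is visible here: nothing in the code specifies what a pneumothorax looks like; the convolutional filters acquire that from the labelled examples during training.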
Bibliographic Details
Main Authors: Nagendran, Myura; Chen, Yang; Lovejoy, Christopher A; Gordon, Anthony C; Komorowski, Matthieu; Harvey, Hugh; Topol, Eric J; Ioannidis, John P A; Collins, Gary S; Maruthappu, Mahiben
Format: Online Article Text
Language: English
Published: BMJ Publishing Group Ltd, 25 March 2020
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7190037/
https://www.ncbi.nlm.nih.gov/pubmed/32213531
http://dx.doi.org/10.1136/bmj.m689
Rights: © Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC; no commercial re-use. Published by BMJ. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.