
Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department

IMPORTANCE: Multimodal generative artificial intelligence (AI) methodologies have the potential to optimize emergency department care by producing draft radiology reports from input images.
OBJECTIVE: To evaluate the accuracy and quality of AI–generated chest radiograph interpretations in the emergency department setting.
DESIGN, SETTING, AND PARTICIPANTS: This was a retrospective diagnostic study of 500 randomly sampled emergency department encounters at a tertiary care institution including chest radiographs interpreted by both a teleradiology service and on-site attending radiologist from January 2022 to January 2023. An AI interpretation was generated for each radiograph. The 3 radiograph interpretations were each rated in duplicate by 6 emergency department physicians using a 5-point Likert scale.
MAIN OUTCOMES AND MEASURES: The primary outcome was any difference in Likert scores between radiologist, AI, and teleradiology reports, using a cumulative link mixed model. Secondary analyses compared the probability of each report type containing no clinically significant discrepancy with further stratification by finding presence, using a logistic mixed-effects model. Physician comments on discrepancies were recorded.
RESULTS: A total of 500 ED studies were included from 500 unique patients with a mean (SD) age of 53.3 (21.6) years; 282 patients (56.4%) were female. There was a significant association of report type with ratings, with post hoc tests revealing significantly greater scores for AI (mean [SE] score, 3.22 [0.34]; P < .001) and radiologist (mean [SE] score, 3.34 [0.34]; P < .001) reports compared with teleradiology (mean [SE] score, 2.74 [0.34]) reports. AI and radiologist reports were not significantly different. On secondary analysis, there was no difference in the probability of no clinically significant discrepancy between the 3 report types. Further stratification of reports by presence of cardiomegaly, pulmonary edema, pleural effusion, infiltrate, pneumothorax, and support devices also yielded no difference in the probability of containing no clinically significant discrepancy between the report types.
CONCLUSIONS AND RELEVANCE: In a representative sample of emergency department chest radiographs, results suggest that the generative AI model produced reports of similar clinical accuracy and textual quality to radiologist reports while providing higher textual quality than teleradiologist reports. Implementation of the model in the clinical workflow could enable timely alerts to life-threatening pathology while aiding imaging interpretation and documentation.


Bibliographic Details
Main Authors: Huang, Jonathan, Neill, Luke, Wittbrodt, Matthew, Melnick, David, Klug, Matthew, Thompson, Michael, Bailitz, John, Loftus, Timothy, Malik, Sanjeev, Phull, Amit, Weston, Victoria, Heller, J. Alex, Etemadi, Mozziyar
Format: Online Article Text
Language: English
Published: American Medical Association 2023
Subjects:
Online Access (see the retrieval sketch after these links): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556963/
https://www.ncbi.nlm.nih.gov/pubmed/37796505
http://dx.doi.org/10.1001/jamanetworkopen.2023.36100
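
The PubMed identifier in the links above (37796505) can also be used to pull this citation programmatically. Below is a minimal sketch against the public NCBI E-utilities efetch endpoint; the helper name is made up for illustration, and the rettype/retmode choices are standard efetch options rather than anything specific to this catalog.

# Minimal sketch: retrieve this record's PubMed citation via NCBI E-utilities.
# The PMID comes from the access links above; fetch_pubmed_citation is a
# hypothetical helper, not part of this catalog or of the article.
from urllib.parse import urlencode
from urllib.request import urlopen

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def fetch_pubmed_citation(pmid: str) -> str:
    """Return the plain-text citation and abstract for a single PubMed ID."""
    # NCBI asks regular users to identify themselves (tool/email parameters)
    # and to respect its rate limits; omitted here to keep the sketch short.
    params = urlencode({
        "db": "pubmed",
        "id": pmid,
        "rettype": "abstract",
        "retmode": "text",
    })
    with urlopen(f"{EFETCH}?{params}") as response:
        return response.read().decode("utf-8")

if __name__ == "__main__":
    print(fetch_pubmed_citation("37796505"))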
_version_ 1785116984270127104
author Huang, Jonathan
Neill, Luke
Wittbrodt, Matthew
Melnick, David
Klug, Matthew
Thompson, Michael
Bailitz, John
Loftus, Timothy
Malik, Sanjeev
Phull, Amit
Weston, Victoria
Heller, J. Alex
Etemadi, Mozziyar
author_facet Huang, Jonathan
Neill, Luke
Wittbrodt, Matthew
Melnick, David
Klug, Matthew
Thompson, Michael
Bailitz, John
Loftus, Timothy
Malik, Sanjeev
Phull, Amit
Weston, Victoria
Heller, J. Alex
Etemadi, Mozziyar
author_sort Huang, Jonathan
collection PubMed
description IMPORTANCE: Multimodal generative artificial intelligence (AI) methodologies have the potential to optimize emergency department care by producing draft radiology reports from input images. OBJECTIVE: To evaluate the accuracy and quality of AI–generated chest radiograph interpretations in the emergency department setting. DESIGN, SETTING, AND PARTICIPANTS: This was a retrospective diagnostic study of 500 randomly sampled emergency department encounters at a tertiary care institution including chest radiographs interpreted by both a teleradiology service and on-site attending radiologist from January 2022 to January 2023. An AI interpretation was generated for each radiograph. The 3 radiograph interpretations were each rated in duplicate by 6 emergency department physicians using a 5-point Likert scale. MAIN OUTCOMES AND MEASURES: The primary outcome was any difference in Likert scores between radiologist, AI, and teleradiology reports, using a cumulative link mixed model. Secondary analyses compared the probability of each report type containing no clinically significant discrepancy with further stratification by finding presence, using a logistic mixed-effects model. Physician comments on discrepancies were recorded. RESULTS: A total of 500 ED studies were included from 500 unique patients with a mean (SD) age of 53.3 (21.6) years; 282 patients (56.4%) were female. There was a significant association of report type with ratings, with post hoc tests revealing significantly greater scores for AI (mean [SE] score, 3.22 [0.34]; P < .001) and radiologist (mean [SE] score, 3.34 [0.34]; P < .001) reports compared with teleradiology (mean [SE] score, 2.74 [0.34]) reports. AI and radiologist reports were not significantly different. On secondary analysis, there was no difference in the probability of no clinically significant discrepancy between the 3 report types. Further stratification of reports by presence of cardiomegaly, pulmonary edema, pleural effusion, infiltrate, pneumothorax, and support devices also yielded no difference in the probability of containing no clinically significant discrepancy between the report types. CONCLUSIONS AND RELEVANCE: In a representative sample of emergency department chest radiographs, results suggest that the generative AI model produced reports of similar clinical accuracy and textual quality to radiologist reports while providing higher textual quality than teleradiologist reports. Implementation of the model in the clinical workflow could enable timely alerts to life-threatening pathology while aiding imaging interpretation and documentation.
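
The MAIN OUTCOMES AND MEASURES portion of the description above compares 5-point Likert ratings across the three report types with a cumulative link mixed model. As a rough illustration of that model family only, the sketch below fits a plain proportional-odds (ordered logit) model with statsmodels; the column names and ratings are invented, and it omits the random rater and study effects that the authors' mixed model includes, so it shows the model class rather than the study's actual analysis.

# Hedged sketch: cumulative link (proportional odds) model for ordinal Likert
# ratings. Hypothetical data; no random effects, unlike the mixed model
# described in the abstract.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical long-format ratings: one row per rated report.
ratings = pd.DataFrame({
    "rating": [4, 3, 5, 4, 2, 4, 5, 3,      # AI reports
               3, 4, 4, 5, 3, 4, 2, 5,      # radiologist reports
               2, 3, 4, 1, 3, 2, 3, 1],     # teleradiology reports
    "report_type": ["ai"] * 8 + ["radiologist"] * 8 + ["teleradiology"] * 8,
})

# Treat the 1-5 scores as an ordered categorical outcome.
ratings["rating"] = ratings["rating"].astype("category").cat.as_ordered()

# Dummy-code report type (OrderedModel does not take an intercept column).
exog = pd.get_dummies(ratings["report_type"], drop_first=True, dtype=float)

model = OrderedModel(ratings["rating"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())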
format Online
Article
Text
id pubmed-10556963
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher American Medical Association
record_format MEDLINE/PubMed
spelling pubmed-10556963 2023-10-07 Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department Huang, Jonathan Neill, Luke Wittbrodt, Matthew Melnick, David Klug, Matthew Thompson, Michael Bailitz, John Loftus, Timothy Malik, Sanjeev Phull, Amit Weston, Victoria Heller, J. Alex Etemadi, Mozziyar JAMA Netw Open Original Investigation IMPORTANCE: Multimodal generative artificial intelligence (AI) methodologies have the potential to optimize emergency department care by producing draft radiology reports from input images. OBJECTIVE: To evaluate the accuracy and quality of AI–generated chest radiograph interpretations in the emergency department setting. DESIGN, SETTING, AND PARTICIPANTS: This was a retrospective diagnostic study of 500 randomly sampled emergency department encounters at a tertiary care institution including chest radiographs interpreted by both a teleradiology service and on-site attending radiologist from January 2022 to January 2023. An AI interpretation was generated for each radiograph. The 3 radiograph interpretations were each rated in duplicate by 6 emergency department physicians using a 5-point Likert scale. MAIN OUTCOMES AND MEASURES: The primary outcome was any difference in Likert scores between radiologist, AI, and teleradiology reports, using a cumulative link mixed model. Secondary analyses compared the probability of each report type containing no clinically significant discrepancy with further stratification by finding presence, using a logistic mixed-effects model. Physician comments on discrepancies were recorded. RESULTS: A total of 500 ED studies were included from 500 unique patients with a mean (SD) age of 53.3 (21.6) years; 282 patients (56.4%) were female. There was a significant association of report type with ratings, with post hoc tests revealing significantly greater scores for AI (mean [SE] score, 3.22 [0.34]; P < .001) and radiologist (mean [SE] score, 3.34 [0.34]; P < .001) reports compared with teleradiology (mean [SE] score, 2.74 [0.34]) reports. AI and radiologist reports were not significantly different. On secondary analysis, there was no difference in the probability of no clinically significant discrepancy between the 3 report types. Further stratification of reports by presence of cardiomegaly, pulmonary edema, pleural effusion, infiltrate, pneumothorax, and support devices also yielded no difference in the probability of containing no clinically significant discrepancy between the report types. CONCLUSIONS AND RELEVANCE: In a representative sample of emergency department chest radiographs, results suggest that the generative AI model produced reports of similar clinical accuracy and textual quality to radiologist reports while providing higher textual quality than teleradiologist reports. Implementation of the model in the clinical workflow could enable timely alerts to life-threatening pathology while aiding imaging interpretation and documentation. American Medical Association 2023-10-05 /pmc/articles/PMC10556963/ /pubmed/37796505 http://dx.doi.org/10.1001/jamanetworkopen.2023.36100 Text en Copyright 2023 Huang J et al. JAMA Network Open. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the CC-BY License.
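
The secondary analysis described in this record models a binary outcome (whether a report contains no clinically significant discrepancy) with a logistic mixed-effects model. statsmodels does not ship a frequentist logistic mixed model, so the sketch below substitutes its Bayesian binomial mixed GLM with a rater variance component; the data and column names are invented, and the substitution is an assumption rather than the authors' method.

# Hedged sketch: logistic mixed-effects style analysis of a binary outcome
# with a per-rater variance component, using statsmodels' Bayesian mixed GLM
# as a stand-in for the frequentist model described in the abstract.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per (rater, report) assessment.
data = pd.DataFrame({
    "no_discrepancy": [1, 1, 0, 1, 1, 1, 0, 1, 0,
                       1, 1, 1, 1, 0, 1, 1, 1, 0],
    "report_type": ["ai", "radiologist", "teleradiology"] * 6,
    "rater": [r for r in ["r1", "r2", "r3", "r4", "r5", "r6"] for _ in range(3)],
})

model = BinomialBayesMixedGLM.from_formula(
    "no_discrepancy ~ C(report_type)",      # fixed effect of report source
    vc_formulas={"rater": "0 + C(rater)"},  # random intercept per rater
    data=data,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())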
spellingShingle Original Investigation
Huang, Jonathan
Neill, Luke
Wittbrodt, Matthew
Melnick, David
Klug, Matthew
Thompson, Michael
Bailitz, John
Loftus, Timothy
Malik, Sanjeev
Phull, Amit
Weston, Victoria
Heller, J. Alex
Etemadi, Mozziyar
Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department
title Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department
title_full Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department
title_fullStr Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department
title_full_unstemmed Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department
title_short Generative Artificial Intelligence for Chest Radiograph Interpretation in the Emergency Department
title_sort generative artificial intelligence for chest radiograph interpretation in the emergency department
topic Original Investigation
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556963/
https://www.ncbi.nlm.nih.gov/pubmed/37796505
http://dx.doi.org/10.1001/jamanetworkopen.2023.36100
work_keys_str_mv AT huangjonathan generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT neillluke generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT wittbrodtmatthew generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT melnickdavid generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT klugmatthew generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT thompsonmichael generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT bailitzjohn generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT loftustimothy generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT maliksanjeev generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT phullamit generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT westonvictoria generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT hellerjalex generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment
AT etemadimozziyar generativeartificialintelligenceforchestradiographinterpretationintheemergencydepartment