Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool
Main authors | Ayoub, Marc; Ballout, Ahmad A; Zayek, Rosana A; Ayoub, Noel F
---|---
Format | Online Article Text
Language | English
Published | Cureus, 2023
Subjects | Emergency Medicine
Online access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10505276/ https://www.ncbi.nlm.nih.gov/pubmed/37724211 http://dx.doi.org/10.7759/cureus.43690
_version_ | 1785106887271776256 |
---|---|
author | Ayoub, Marc; Ballout, Ahmad A; Zayek, Rosana A; Ayoub, Noel F |
author_facet | Ayoub, Marc; Ballout, Ahmad A; Zayek, Rosana A; Ayoub, Noel F |
author_sort | Ayoub, Marc |
collection | PubMed |
description | Background Generative artificial intelligence (AI) has been integrated into various industries, as it has demonstrated enormous potential for automating elaborate processes and enhancing complex decision-making. The ability of these chatbots to critically triage, diagnose, and manage complex medical conditions remains unknown and requires further research. Objective This cross-sectional study sought to quantitatively analyze the ability of ChatGPT (OpenAI, San Francisco, CA, US) to triage patients, synthesize differential diagnoses, and generate treatment plans for nine diverse but common clinical scenarios. Methods Nine common clinical scenarios were developed. Each was input into ChatGPT, and the chatbot was asked to develop diagnostic and treatment plans. Five practicing physicians independently scored ChatGPT’s responses to the clinical scenarios. Results The average overall score for the triage ranking was 4.2 (SD 0.7). The lowest overall score was for the completeness of the differential diagnosis, at 4.1 (SD 0.5). The highest overall scores were for the accuracy of the differential diagnosis, the initial treatment plan, and the overall usefulness of the response (each with an average score of 4.4). Variance among physician scores ranged from 0.24 for accuracy of the differential diagnosis to 0.49 for appropriateness of the triage ranking (a minimal sketch of how such summary statistics can be computed appears after this record). Discussion ChatGPT has the potential to augment clinical decision-making. More extensive research, however, is needed to ensure that its recommendations are accurate and appropriate. |
format | Online Article Text |
id | pubmed-10505276 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cureus |
record_format | MEDLINE/PubMed |
spelling | pubmed-10505276 2023-09-18 Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool Ayoub, Marc; Ballout, Ahmad A; Zayek, Rosana A; Ayoub, Noel F Cureus Emergency Medicine Background Generative artificial intelligence (AI) has been integrated into various industries, as it has demonstrated enormous potential for automating elaborate processes and enhancing complex decision-making. The ability of these chatbots to critically triage, diagnose, and manage complex medical conditions remains unknown and requires further research. Objective This cross-sectional study sought to quantitatively analyze the ability of ChatGPT (OpenAI, San Francisco, CA, US) to triage patients, synthesize differential diagnoses, and generate treatment plans for nine diverse but common clinical scenarios. Methods Nine common clinical scenarios were developed. Each was input into ChatGPT, and the chatbot was asked to develop diagnostic and treatment plans. Five practicing physicians independently scored ChatGPT’s responses to the clinical scenarios. Results The average overall score for the triage ranking was 4.2 (SD 0.7). The lowest overall score was for the completeness of the differential diagnosis, at 4.1 (SD 0.5). The highest overall scores were for the accuracy of the differential diagnosis, the initial treatment plan, and the overall usefulness of the response (each with an average score of 4.4). Variance among physician scores ranged from 0.24 for accuracy of the differential diagnosis to 0.49 for appropriateness of the triage ranking. Discussion ChatGPT has the potential to augment clinical decision-making. More extensive research, however, is needed to ensure that its recommendations are accurate and appropriate. Cureus 2023-08-18 /pmc/articles/PMC10505276/ /pubmed/37724211 http://dx.doi.org/10.7759/cureus.43690 Text en Copyright © 2023, Ayoub et al. https://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Emergency Medicine; Ayoub, Marc; Ballout, Ahmad A; Zayek, Rosana A; Ayoub, Noel F; Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool |
title | Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool |
title_full | Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool |
title_fullStr | Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool |
title_full_unstemmed | Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool |
title_short | Mind + Machine: ChatGPT as a Basic Clinical Decisions Support Tool |
title_sort | mind + machine: chatgpt as a basic clinical decisions support tool |
topic | Emergency Medicine |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10505276/ https://www.ncbi.nlm.nih.gov/pubmed/37724211 http://dx.doi.org/10.7759/cureus.43690 |
work_keys_str_mv | AT ayoubmarc mindmachinechatgptasabasicclinicaldecisionssupporttool AT balloutahmada mindmachinechatgptasabasicclinicaldecisionssupporttool AT zayekrosanaa mindmachinechatgptasabasicclinicaldecisionssupporttool AT ayoubnoelf mindmachinechatgptasabasicclinicaldecisionssupporttool |
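The abstract above reports per-criterion means, standard deviations, and inter-rater variances across five physician raters. As a purely illustrative aside, here is a minimal Python sketch of how such summary statistics are computed from a rater-by-criterion score table; the study's raw scores are not included in this record, so every score below is invented for demonstration only.

```python
import statistics

# Hypothetical data only: five physicians rating two of the study's
# criteria on a 1-5 Likert scale. These values are NOT from the study.
scores_by_criterion = {
    "triage_ranking": [5, 4, 4, 3, 5],
    "ddx_accuracy": [5, 4, 5, 4, 4],
}

for criterion, scores in scores_by_criterion.items():
    mean = statistics.mean(scores)        # average score across raters
    sd = statistics.stdev(scores)         # sample standard deviation
    var = statistics.variance(scores)     # sample variance (sd squared)
    print(f"{criterion}: mean={mean:.1f}, SD={sd:.2f}, variance={var:.2f}")
```

In a layout like this, a lower variance for a criterion (such as the 0.24 reported for differential-diagnosis accuracy) indicates closer agreement among the five raters than a higher one (such as the 0.49 reported for triage ranking).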