
Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards


Bibliographic Details
Main Authors: Roberts, Richard HR; Ali, Stephen R; Hutchings, Hayley A; Dobbs, Thomas D; Whitaker, Iain S
Format: Online Article Text
Language: English
Published: BMJ Publishing Group, 2023
Subjects: Short Report
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10583079/
https://www.ncbi.nlm.nih.gov/pubmed/37827724
http://dx.doi.org/10.1136/bmjhci-2023-100830
author Roberts, Richard HR
Ali, Stephen R
Hutchings, Hayley A
Dobbs, Thomas D
Whitaker, Iain S
collection PubMed
description INTRODUCTION: Amid clinicians’ challenges in staying updated with medical research, artificial intelligence (AI) tools like the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis.
METHODS: We compared ChatGPT’s scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human- and AI-generated OCS percentages. Additional error analysis included the mean difference of OCS subscores, Welch’s t-test and Pearson’s correlation coefficient.
RESULTS: Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in ‘conclusion’ (0.764 (95% CI 0.186, 0.280)) and the lowest in ‘blinding’ (0.034 (95% CI 0.818, 0.895)). The strongest correlations between human and ChatGPT scoring were in ‘harms’ (r=0.32, p<0.001) and ‘trial registration’ (r=0.34, p=0.002), whereas the weakest were in ‘intervention’ (r=0.02, p<0.001) and ‘objective’ (r=0.06, p<0.001).
CONCLUSION: LLMs like ChatGPT can help automate the appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, restricting its usage to abstracts. As AI technology advances, future versions like GPT4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
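The agreement statistics named in the METHODS above are standard and straightforward to reproduce. The short Python sketch below is not part of the original record: it illustrates, on invented example data, how a Bland-Altman mean difference with 95% limits of agreement, Welch's t-test and Pearson's correlation coefficient could be computed for paired human and ChatGPT overall compliance score (OCS) percentages; the arrays and variable names are illustrative assumptions, not data from the study.

# Illustrative sketch only: the agreement statistics described in the abstract
# (Bland-Altman analysis, Welch's t-test, Pearson's r) applied to paired
# human vs ChatGPT overall compliance scores (OCS). Example values are invented.
import numpy as np
from scipy import stats

# Hypothetical OCS percentages for the same abstracts, scored by each rater.
human_ocs = np.array([75.0, 60.0, 82.5, 55.0, 90.0, 68.0])
chatgpt_ocs = np.array([70.0, 65.0, 80.0, 50.0, 85.0, 72.0])

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = human_ocs - chatgpt_ocs
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

# Welch's t-test (unequal variances assumed) on the two sets of scores.
t_stat, t_p = stats.ttest_ind(human_ocs, chatgpt_ocs, equal_var=False)

# Pearson's correlation between the paired scores.
r, r_p = stats.pearsonr(human_ocs, chatgpt_ocs)

print(f"Bland-Altman bias: {bias:.2f}% (95% LoA {loa_low:.2f}% to {loa_high:.2f}%)")
print(f"Welch's t-test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Pearson's r: {r:.2f}, p={r_p:.3f}")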
format Online
Article
Text
id pubmed-10583079
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BMJ Publishing Group
record_format MEDLINE/PubMed
journal BMJ Health Care Inform (Short Report)
published online 2023-10-12
rights © Author(s) (or their employer(s)) 2023. Re-use permitted under CC BY. Published by BMJ. This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
title Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards
topic Short Report
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10583079/
https://www.ncbi.nlm.nih.gov/pubmed/37827724
http://dx.doi.org/10.1136/bmjhci-2023-100830