
ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?

Background: Content generated by artificial intelligence is sometimes not truthful. To date, there have been a number of medical studies related to the validity of ChatGPT’s responses; however, there is a lack of studies addressing various aspects of statistical analysis. The aim of this study was to assess the validity of the answers provided by ChatGPT in relation to statistical analysis, as well as to identify recommendations to be implemented in the future in connection with the results obtained. Methods: The study was divided into four parts and was based on the exemplary medical field of allergology. The first part consisted of asking ChatGPT 30 different questions related to statistical analysis. The next five questions included a request for ChatGPT to perform the relevant statistical analyses, and another five requested ChatGPT to indicate which statistical test should be applied to articles accepted for publication in Allergy. The final part of the survey involved asking ChatGPT the same statistical question three times. Results: Out of the 40 general questions asked that related to broad statistical analysis, ChatGPT did not fully answer half of them. Assumptions necessary for the application of specific statistical tests were not included. ChatGPT also gave completely divergent answers to one question about which test should be used. Conclusion: The answers provided by ChatGPT to various statistical questions may give rise to the use of inappropriate statistical tests and, consequently, the subsequent misinterpretation of the research results obtained. Questions asked in this regard need to be framed more precisely.


Bibliographic Details
Main Author: Ordak, Michal
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10530997/
https://www.ncbi.nlm.nih.gov/pubmed/37761751
http://dx.doi.org/10.3390/healthcare11182554
_version_ 1785111617588953088
author Ordak, Michal
author_facet Ordak, Michal
author_sort Ordak, Michal
collection PubMed
description Background: Content generated by artificial intelligence is sometimes not truthful. To date, there have been a number of medical studies related to the validity of ChatGPT’s responses; however, there is a lack of studies addressing various aspects of statistical analysis. The aim of this study was to assess the validity of the answers provided by ChatGPT in relation to statistical analysis, as well as to identify recommendations to be implemented in the future in connection with the results obtained. Methods: The study was divided into four parts and was based on the exemplary medical field of allergology. The first part consisted of asking ChatGPT 30 different questions related to statistical analysis. The next five questions included a request for ChatGPT to perform the relevant statistical analyses, and another five requested ChatGPT to indicate which statistical test should be applied to articles accepted for publication in Allergy. The final part of the survey involved asking ChatGPT the same statistical question three times. Results: Out of the 40 general questions asked that related to broad statistical analysis, ChatGPT did not fully answer half of them. Assumptions necessary for the application of specific statistical tests were not included. ChatGPT also gave completely divergent answers to one question about which test should be used. Conclusion: The answers provided by ChatGPT to various statistical questions may give rise to the use of inappropriate statistical tests and, consequently, the subsequent misinterpretation of the research results obtained. Questions asked in this regard need to be framed more precisely.
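The abstract notes that ChatGPT's test recommendations omitted the assumptions required by specific statistical tests. As a minimal illustrative sketch (not from the article; data and function names are invented here), the check it refers to might look like verifying normality before choosing between a parametric and a rank-based two-sample test:

```python
# Illustrative only: the kind of assumption check the abstract says was
# omitted when recommending statistical tests. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)
group_b = rng.exponential(scale=5.0, size=30)  # deliberately non-normal

def compare_groups(a, b, alpha=0.05):
    """Choose an independent two-sample test based on normality checks."""
    normal_a = stats.shapiro(a).pvalue > alpha
    normal_b = stats.shapiro(b).pvalue > alpha
    if normal_a and normal_b:
        # Welch's t-test: does not assume equal variances
        result = stats.ttest_ind(a, b, equal_var=False)
        return "welch_t", result.pvalue
    # Normality rejected for at least one group: use a rank-based test
    result = stats.mannwhitneyu(a, b)
    return "mann_whitney_u", result.pvalue

test_name, p_value = compare_groups(group_a, group_b)
print(test_name, p_value)
```

The study's point is that a prompt which skips this step can yield a test whose assumptions the data violate, so the result is uninterpretable regardless of the p-value.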
format Online
Article
Text
id pubmed-10530997
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10530997 2023-09-28 ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern? Ordak, Michal Healthcare (Basel) Article MDPI 2023-09-15 /pmc/articles/PMC10530997/ /pubmed/37761751 http://dx.doi.org/10.3390/healthcare11182554 Text en © 2023 by the author.
https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ordak, Michal
ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?
title ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?
title_full ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?
title_fullStr ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?
title_full_unstemmed ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?
title_short ChatGPT’s Skills in Statistical Analysis Using the Example of Allergology: Do We Have Reason for Concern?
title_sort chatgpt’s skills in statistical analysis using the example of allergology: do we have reason for concern?
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10530997/
https://www.ncbi.nlm.nih.gov/pubmed/37761751
http://dx.doi.org/10.3390/healthcare11182554
work_keys_str_mv AT ordakmichal chatgptsskillsinstatisticalanalysisusingtheexampleofallergologydowehavereasonforconcern