ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders
Main Authors: | Mondal, Himel; Dash, Ipsita; Mondal, Shaikat; Behera, Joshil Kumar |
Format: | Online Article Text |
Language: | English |
Published: | Cureus, 2023 |
Subjects: | Family/General Practice |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10696911/ http://dx.doi.org/10.7759/cureus.48296 |
_version_ | 1785154673784651776 |
author | Mondal, Himel; Dash, Ipsita; Mondal, Shaikat; Behera, Joshil Kumar
author_facet | Mondal, Himel; Dash, Ipsita; Mondal, Shaikat; Behera, Joshil Kumar
author_sort | Mondal, Himel |
collection | PubMed |
description | Background: Lifestyle-related diseases and disorders have become a significant global health burden. However, a majority of the population ignores such diseases or disorders or does not consult a doctor about them. An artificial intelligence (AI)-based large language model (LLM) such as ChatGPT (GPT-3.5) is capable of generating customized responses to a user's queries. Hence, it can act as a virtual telehealth agent. Its capability to answer queries about lifestyle-related diseases or disorders has not been explored. Objective: This study aimed to evaluate the effectiveness of ChatGPT, an LLM, in providing answers to queries related to lifestyle-related diseases or disorders. Methods: A set of 20 lifestyle-related disease or disorder cases covering a wide range of topics, such as obesity, diabetes, cardiovascular health, and mental health, was prepared, each with four questions. The cases and questions were presented to ChatGPT, which was asked to answer them. Two physicians rated the content on a three-point Likert-like scale: accurate (2), partially accurate (1), and inaccurate (0). The content was further rated as adequate (2), inadequate (1), or misguiding (0) to test its applicability as guidance for patients. The readability of the text was analyzed with the Flesch-Kincaid Ease Score (FKES). Results: Among the 20 cases, the average accuracy score was 1.83±0.37 and the average guidance score was 1.9±0.21. Both scores were higher than the hypothetical median of 1.5 (p=0.004 and p<0.0001, respectively). ChatGPT answered the questions with a natural tone in 11 cases and a positive tone in nine. The text was understandable to college graduates, with a mean FKES of 27.8±5.74. Conclusion: The analysis of content accuracy revealed that ChatGPT provided reasonably accurate information in the majority of cases, successfully addressing queries related to lifestyle-related diseases or disorders. Hence, patients can obtain initial guidance when they have little time to consult a doctor or while waiting for an appointment. |
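The abstract names two quantitative measurements: readability via the Flesch-Kincaid Ease Score (FKES) and rating scores compared against a hypothetical median of 1.5. Below is a minimal sketch of how such measurements could be reproduced, assuming FKES refers to the standard Flesch Reading Ease formula and that the median comparison uses a one-sample Wilcoxon signed-rank test; the syllable heuristic, the example ratings, and the choice of test are illustrative assumptions, not details taken from the article.

```python
# Illustrative sketch (not the authors' code) of the two measurements
# described in the abstract: a Flesch Reading Ease score and a one-sample
# comparison of ratings against a hypothetical median of 1.5.
import re
import numpy as np
from scipy.stats import wilcoxon

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; dedicated readability tools use better rules.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Standard Flesch Reading Ease formula:
    # 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Hypothetical per-case ratings (0-2 scale) for 20 cases, compared against
# the hypothetical median of 1.5 with a one-sample Wilcoxon signed-rank test.
ratings = np.array([2, 2, 1, 2, 2, 2, 2, 1, 2, 2,
                    2, 2, 2, 1, 2, 2, 2, 2, 2, 2], dtype=float)
stat, p = wilcoxon(ratings - 1.5)
print(f"mean rating = {ratings.mean():.2f}, p = {p:.4f}")
print(f"FKES of a sample reply = {flesch_reading_ease('ChatGPT answered the query.'):.1f}")
```

With ratings clustered near 2, the test rejects a median of 1.5, mirroring the direction of the comparison reported in the abstract; the specific p-values there come from the authors' own data and rating procedure.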
format | Online Article Text |
id | pubmed-10696911 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cureus |
record_format | MEDLINE/PubMed |
spelling | pubmed-10696911 2023-12-06 ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders. Mondal, Himel; Dash, Ipsita; Mondal, Shaikat; Behera, Joshil Kumar. Cureus, Family/General Practice. Cureus 2023-11-05 /pmc/articles/PMC10696911/ http://dx.doi.org/10.7759/cureus.48296 Text en. Copyright © 2023, Mondal et al. https://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Family/General Practice; Mondal, Himel; Dash, Ipsita; Mondal, Shaikat; Behera, Joshil Kumar; ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders
title | ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders |
title_full | ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders |
title_fullStr | ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders |
title_full_unstemmed | ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders |
title_short | ChatGPT in Answering Queries Related to Lifestyle-Related Diseases and Disorders |
title_sort | chatgpt in answering queries related to lifestyle-related diseases and disorders |
topic | Family/General Practice |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10696911/ http://dx.doi.org/10.7759/cureus.48296 |
work_keys_str_mv | AT mondalhimel chatgptinansweringqueriesrelatedtolifestylerelateddiseasesanddisorders AT dashipsita chatgptinansweringqueriesrelatedtolifestylerelateddiseasesanddisorders AT mondalshaikat chatgptinansweringqueriesrelatedtolifestylerelateddiseasesanddisorders AT beherajoshilkumar chatgptinansweringqueriesrelatedtolifestylerelateddiseasesanddisorders |