The Capability of ChatGPT in Predicting and Explaining Common Drug-Drug Interactions

Bibliographic Details
Main Authors: Juhi, Ayesha; Pipil, Neha; Santra, Soumya; Mondal, Shaikat; Behera, Joshil Kumar; Mondal, Himel
Format: Online Article Text
Language: English
Published: Cureus, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10105894/
https://www.ncbi.nlm.nih.gov/pubmed/37073184
http://dx.doi.org/10.7759/cureus.36272
Description
Summary:
Background: Drug-drug interactions (DDIs) can have serious consequences for patient health and well-being. Patients taking multiple medications may be at increased risk of adverse events or drug toxicity if they are unaware of potential interactions between their medications, and patients often self-prescribe medications without knowing about DDIs.
Objective: To investigate the effectiveness of ChatGPT, a large language model, in predicting and explaining common DDIs.
Methods: A list of 40 DDI pairs was prepared from previously published literature and used to converse with ChatGPT in two stages. The first question, posed with two drug names, was "Can I take X and Y together?"; after its output was stored, the second question, "Why should I not take X and Y together?", was asked and its output stored for further analysis. Two pharmacologists checked the responses, and the consensus output was categorized as "correct" or "incorrect"; the "correct" responses were further classified as "conclusive" or "inconclusive." The texts were scored for reading ease and for the grade of education required to understand them. Data were analyzed with descriptive and inferential statistics.
Results: Among the 40 DDI pairs, one answer to the first question was incorrect; of the correct answers, 19 were conclusive and 20 were inconclusive. For the second question, one answer was wrong; of the correct answers, 17 were conclusive and 22 were inconclusive. The mean Flesch reading ease score was 27.64 ± 10.85 for answers to the first question and 29.35 ± 10.16 for answers to the second (p = 0.47). The mean Flesch-Kincaid grade level was 15.06 ± 2.79 for answers to the first question and 14.85 ± 1.97 for answers to the second (p = 0.69). Compared with a hypothetical 6th-grade reading level, the grade levels were significantly higher than expected (t = 20.57, p < 0.0001 for the first answers; t = 28.43, p < 0.0001 for the second answers).
Conclusion: ChatGPT is a partially effective tool for predicting and explaining DDIs. Patients who do not have immediate access to a healthcare facility for information about DDIs may find it helpful, but on several occasions it may provide incomplete guidance. Further improvement is required before patients can use it to get reliable information about DDIs.
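
The readability analysis described in the Methods can be sketched in a few lines of Python. The snippet below is a minimal sketch assuming the textstat and scipy packages; the drug pair, the example responses, and the scores they produce are illustrative placeholders rather than the study's data, and the ChatGPT conversation itself is not reproduced.

    import textstat
    from scipy import stats

    # Two-stage questions as described in the Methods; warfarin/aspirin is an
    # illustrative pair, not necessarily one of the study's 40 DDI pairs.
    drug_x, drug_y = "warfarin", "aspirin"
    question_1 = f"Can I take {drug_x} and {drug_y} together?"
    question_2 = f"Why should I not take {drug_x} and {drug_y} together?"

    # Hypothetical ChatGPT answers; the study collected its responses through
    # the ChatGPT interface, which this sketch does not reproduce.
    responses = [
        "Taking warfarin with aspirin may substantially increase the risk of "
        "bleeding because both interfere with normal hemostasis.",
        "Concurrent use can potentiate anticoagulation and inhibit platelet "
        "aggregation, so the combination requires close medical supervision.",
    ]

    # Flesch reading ease (higher = easier to read) and Flesch-Kincaid grade
    # level (approximate U.S. school grade needed to understand the text).
    ease_scores = [textstat.flesch_reading_ease(t) for t in responses]
    grade_levels = [textstat.flesch_kincaid_grade(t) for t in responses]

    # One-sample t-test of grade levels against the hypothetical 6th-grade
    # benchmark used in the Results.
    t_stat, p_value = stats.ttest_1samp(grade_levels, popmean=6)
    print(ease_scores, grade_levels)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

textstat implements the standard Flesch formulas (reading ease = 206.835 - 1.015 × words/sentence - 84.6 × syllables/word; grade level = 0.39 × words/sentence + 11.8 × syllables/word - 15.59), so the scores depend only on the response text supplied.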