
Causal reasoning about epidemiological associations in conversational AI

Bibliographic Details
Main Author: Cox, Louis Anthony
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10445972/
https://www.ncbi.nlm.nih.gov/pubmed/37638368
http://dx.doi.org/10.1016/j.gloepi.2023.100102
Description
Summary: We present a Socratic dialogue with ChatGPT, a large language model (LLM), on the causal interpretation of epidemiological associations between fine particulate matter (PM2.5) and human mortality risks. ChatGPT, reflecting probable patterns of human reasoning and argumentation in the sources on which it has been trained, initially holds that “It is well-established that exposure to ambient levels of PM2.5 does increase mortality risk” and adds the unsolicited remark that “Reducing exposure to PM2.5 is an important public health priority.” After patient questioning, however, it concludes that “It is not known with certainty that current ambient levels of PM2.5 increase mortality risk. While there is strong evidence of an association between PM2.5 and mortality risk, the causal nature of this association remains uncertain due to the possibility of omitted confounders.” This revised evaluation of the evidence suggests the potential value of sustained questioning in refining and improving both the types of human reasoning and argumentation imitated by current LLMs and the reliability of the initial conclusions expressed by current LLMs.