Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment
ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated.
| Main Authors: | Elyoseph, Zohar; Levkovich, Inbar |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2023 |
| Subjects: | Psychiatry |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10427505/ https://www.ncbi.nlm.nih.gov/pubmed/37593450 http://dx.doi.org/10.3389/fpsyt.2023.1213141 |
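The abstract above outlines the study design: ChatGPT was given a clinical vignette with varying levels of perceived burdensomeness and thwarted belongingness, asked to rate suicide-related indicators, and its ratings were compared to mental health professionals' norms. As a rough sketch only, assuming the OpenAI Python client (`openai>=1.0`), an illustrative model name, and a made-up vignette and rating prompt (none of which are taken from the paper, which used the ChatGPT interface directly), such a query could look like:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical vignette for illustration only; the study's actual vignette text,
# prompts, and rating scales are not reproduced in this record.
vignette = (
    "A middle-aged patient reports feeling like a burden to his family "
    "(high perceived burdensomeness) and feeling cut off from friends "
    "(high thwarted belongingness)."
)

prompt = (
    "Read the following vignette and rate, from 1 (very low) to 7 (very high): "
    "(a) the likelihood of a suicide attempt and (b) the patient's mental "
    "resilience.\n\n" + vignette
)

# Illustrative model choice; the specific model version is an assumption.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# In the study's design, the resulting ratings were compared against
# published norms for mental health professionals.
```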
_version_ | 1785090256824958976 |
---|---|
author | Elyoseph, Zohar; Levkovich, Inbar
author_facet | Elyoseph, Zohar; Levkovich, Inbar
author_sort | Elyoseph, Zohar |
collection | PubMed |
description | ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT’s assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk. |
format | Online Article Text |
id | pubmed-10427505 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10427505 2023-08-17 Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment Elyoseph, Zohar; Levkovich, Inbar Front Psychiatry Psychiatry ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT’s assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk. Frontiers Media S.A. 2023-08-01 /pmc/articles/PMC10427505/ /pubmed/37593450 http://dx.doi.org/10.3389/fpsyt.2023.1213141 Text en Copyright © 2023 Elyoseph and Levkovich. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle | Psychiatry; Elyoseph, Zohar; Levkovich, Inbar; Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment
title | Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment |
title_full | Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment |
title_fullStr | Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment |
title_full_unstemmed | Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment |
title_short | Beyond human expertise: the promise and limitations of ChatGPT in suicide risk assessment |
title_sort | beyond human expertise: the promise and limitations of chatgpt in suicide risk assessment |
topic | Psychiatry |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10427505/ https://www.ncbi.nlm.nih.gov/pubmed/37593450 http://dx.doi.org/10.3389/fpsyt.2023.1213141 |
work_keys_str_mv | AT elyosephzohar beyondhumanexpertisethepromiseandlimitationsofchatgptinsuicideriskassessment AT levkovichinbar beyondhumanexpertisethepromiseandlimitationsofchatgptinsuicideriskassessment |