A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement
Main Authors: | Sarkar, Surjodeep; Gaur, Manas; Chen, Lujie Karen; Garg, Muskan; Srivastava, Biplav
---|---|
Format: | Online Article Text
Language: | English
Published: | Frontiers Media S.A., 2023
Subjects: | Artificial Intelligence
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10601652/ https://www.ncbi.nlm.nih.gov/pubmed/37899961 http://dx.doi.org/10.3389/frai.2023.1229805
_version_ | 1785126239686623232 |
---|---|
author | Sarkar, Surjodeep; Gaur, Manas; Chen, Lujie Karen; Garg, Muskan; Srivastava, Biplav
author_facet | Sarkar, Surjodeep; Gaur, Manas; Chen, Lujie Karen; Garg, Muskan; Srivastava, Biplav
author_sort | Sarkar, Surjodeep |
collection | PubMed |
description | Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communication. One such gap is their inability to explain their decisions to patients and MHPs, which makes conversations less trustworthy. Additionally, VMHAs are vulnerable to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs with respect to user-level explainability and safety, a set of properties required for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the AI-driven models GPT-3.5 and GPT-4 that has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of the AI, natural language processing, and MHP communities, the review identifies opportunities for technological progress in VMHAs to ensure that their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
format | Online Article Text |
id | pubmed-10601652 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10601652 2023-10-27 A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement Sarkar, Surjodeep; Gaur, Manas; Chen, Lujie Karen; Garg, Muskan; Srivastava, Biplav Front Artif Intell Artificial Intelligence Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communication. One such gap is their inability to explain their decisions to patients and MHPs, which makes conversations less trustworthy. Additionally, VMHAs are vulnerable to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs with respect to user-level explainability and safety, a set of properties required for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the AI-driven models GPT-3.5 and GPT-4 that has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of the AI, natural language processing, and MHP communities, the review identifies opportunities for technological progress in VMHAs to ensure that their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations. Frontiers Media S.A. 2023-10-12 /pmc/articles/PMC10601652/ /pubmed/37899961 http://dx.doi.org/10.3389/frai.2023.1229805 Text en Copyright © 2023 Sarkar, Gaur, Chen, Garg and Srivastava. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle | Artificial Intelligence; Sarkar, Surjodeep; Gaur, Manas; Chen, Lujie Karen; Garg, Muskan; Srivastava, Biplav; A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement
title | A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement |
title_full | A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement |
title_fullStr | A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement |
title_full_unstemmed | A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement |
title_short | A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement |
title_sort | review of the explainability and safety of conversational agents for mental health to identify avenues for improvement |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10601652/ https://www.ncbi.nlm.nih.gov/pubmed/37899961 http://dx.doi.org/10.3389/frai.2023.1229805 |
work_keys_str_mv | AT sarkarsurjodeep areviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT gaurmanas areviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT chenlujiekaren areviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT gargmuskan areviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT srivastavabiplav areviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT sarkarsurjodeep reviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT gaurmanas reviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT chenlujiekaren reviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT gargmuskan reviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement AT srivastavabiplav reviewoftheexplainabilityandsafetyofconversationalagentsformentalhealthtoidentifyavenuesforimprovement |