Validity of Chatbot Use for Mental Health Assessment: Experimental Study
Main Authors: | Schick, Anita; Feine, Jasper; Morana, Stefan; Maedche, Alexander; Reininghaus, Ulrich |
Format: | Online Article Text |
Language: | English |
Published: | JMIR Publications, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9664331/ https://www.ncbi.nlm.nih.gov/pubmed/36315228 http://dx.doi.org/10.2196/28082 |
_version_ | 1784831078889947136 |
author | Schick, Anita; Feine, Jasper; Morana, Stefan; Maedche, Alexander; Reininghaus, Ulrich |
author_facet | Schick, Anita; Feine, Jasper; Morana, Stefan; Maedche, Alexander; Reininghaus, Ulrich |
author_sort | Schick, Anita |
collection | PubMed |
description | BACKGROUND: Mental disorders in adolescence and young adulthood are major public health concerns. Digital tools such as text-based conversational agents (ie, chatbots) are a promising technology for facilitating mental health assessment. However, the human-like interaction style of chatbots may induce potential biases, such as socially desirable responding (SDR), and may require further effort to complete assessments. OBJECTIVE: This study aimed to investigate the convergent and discriminant validity of chatbots for mental health assessments, the effect of assessment mode on SDR, and the effort required by participants for assessments using chatbots compared with established modes. METHODS: In a counterbalanced within-subject design, we assessed 2 different constructs—psychological distress (Kessler Psychological Distress Scale and Brief Symptom Inventory-18) and problematic alcohol use (Alcohol Use Disorders Identification Test-3)—in 3 modes (chatbot, paper-and-pencil, and web-based), and examined convergent and discriminant validity. In addition, we investigated the effect of mode on SDR, controlling for perceived sensitivity of items and individuals’ tendency to respond in a socially desirable way, and we also assessed the perceived social presence of modes. Including a between-subject condition, we further investigated whether SDR is increased in chatbot assessments when applied in a self-report setting versus when human interaction may be expected. Finally, the effort (ie, complexity, difficulty, burden, and time) required to complete the assessments was investigated. RESULTS: A total of 146 young adults (mean age 24, SD 6.42 years; n=67, 45.9% female) were recruited from a research panel for laboratory experiments. The results revealed high positive correlations (all P<.001) of measures of the same construct across different modes, indicating the convergent validity of chatbot assessments. 
Furthermore, there were no correlations between the distinct constructs, indicating discriminant validity. Moreover, there were no differences in SDR between modes or by whether human interaction was expected, although the perceived social presence of the chatbot mode was higher than that of the established modes (P<.001). Finally, greater effort (all P<.05) and more time (P<.001) were needed to complete chatbot assessments than to complete the established modes. CONCLUSIONS: Our findings suggest that chatbots may yield valid results. Furthermore, an understanding of chatbot design trade-offs in terms of potential strengths (ie, increased social presence) and limitations (ie, increased effort) when assessing mental health was established. |
format | Online Article Text |
id | pubmed-9664331 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-9664331 2022-11-15 Validity of Chatbot Use for Mental Health Assessment: Experimental Study Schick, Anita; Feine, Jasper; Morana, Stefan; Maedche, Alexander; Reininghaus, Ulrich JMIR Mhealth Uhealth Original Paper JMIR Publications 2022-10-31 /pmc/articles/PMC9664331/ /pubmed/36315228 http://dx.doi.org/10.2196/28082 Text en ©Anita Schick, Jasper Feine, Stefan Morana, Alexander Maedche, Ulrich Reininghaus. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 31.10.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included. |
spellingShingle | Original Paper Schick, Anita Feine, Jasper Morana, Stefan Maedche, Alexander Reininghaus, Ulrich Validity of Chatbot Use for Mental Health Assessment: Experimental Study |
title | Validity of Chatbot Use for Mental Health Assessment: Experimental Study |
title_full | Validity of Chatbot Use for Mental Health Assessment: Experimental Study |
title_fullStr | Validity of Chatbot Use for Mental Health Assessment: Experimental Study |
title_full_unstemmed | Validity of Chatbot Use for Mental Health Assessment: Experimental Study |
title_short | Validity of Chatbot Use for Mental Health Assessment: Experimental Study |
title_sort | validity of chatbot use for mental health assessment: experimental study |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9664331/ https://www.ncbi.nlm.nih.gov/pubmed/36315228 http://dx.doi.org/10.2196/28082 |
work_keys_str_mv | AT schickanita validityofchatbotuseformentalhealthassessmentexperimentalstudy AT feinejasper validityofchatbotuseformentalhealthassessmentexperimentalstudy AT moranastefan validityofchatbotuseformentalhealthassessmentexperimentalstudy AT maedchealexander validityofchatbotuseformentalhealthassessmentexperimentalstudy AT reininghausulrich validityofchatbotuseformentalhealthassessmentexperimentalstudy |