Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey
BACKGROUND: Chatbots and virtual voice assistants are increasingly common in primary care without sufficient evidence for their feasibility and effectiveness. We aimed to assess how perceived stigma and severity of various health issues are associated with the acceptability of three sources of health information and consultation: an automated chatbot, a General Practitioner (GP), or a combination of both.
Main authors: | Miles, Oliver; West, Robert; Nadarzynski, Tom |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | SAGE Publications, 2021 |
Subjects: | Original Research |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8670785/ https://www.ncbi.nlm.nih.gov/pubmed/34917391 http://dx.doi.org/10.1177/20552076211063012 |
_version_ | 1784615032913395712 |
---|---|
author | Miles, Oliver; West, Robert; Nadarzynski, Tom |
author_facet | Miles, Oliver; West, Robert; Nadarzynski, Tom |
author_sort | Miles, Oliver |
collection | PubMed |
description | BACKGROUND: Chatbots and virtual voice assistants are increasingly common in primary care without sufficient evidence for their feasibility and effectiveness. We aimed to assess how perceived stigma and severity of various health issues are associated with the acceptability of three sources of health information and consultation: an automated chatbot, a General Practitioner (GP), or a combination of both. METHODS: Between May and June 2019, we conducted an online study, advertised via Facebook, for UK citizens. It was a factorial simulation experiment with three within-subject factors (perceived health issue stigma, severity, and consultation source) and six between-subject covariates. Acceptability rating for each consultation source was the dependent variable. A single mixed-model ANOVA was performed. RESULTS: Amongst 237 participants (65% aged over 45 years, 73% women), GP consultations were seen as most acceptable, followed by the GP-chatbot service. Chatbots were seen as the least acceptable consultation source for severe health issues, while their acceptability was significantly higher for stigmatised health issues. No associations between participants’ characteristics and acceptability were found. CONCLUSIONS: Although healthcare professionals are perceived as the most desired sources of health information, chatbots may be useful for sensitive health issues in which disclosure of personal information is challenging. However, chatbots are less acceptable for health issues of higher severity and should not be recommended for use within that context. Policymakers and digital service designers need to recognise the limitations of health chatbots. Future research should establish a set of health topics most suitable for chatbot-led interventions and primary healthcare services. |
format | Online Article Text |
id | pubmed-8670785 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | SAGE Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-8670785 2021-12-15 Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey Miles, Oliver West, Robert Nadarzynski, Tom Digit Health Original Research BACKGROUND: Chatbots and virtual voice assistants are increasingly common in primary care without sufficient evidence for their feasibility and effectiveness. We aimed to assess how perceived stigma and severity of various health issues are associated with the acceptability of three sources of health information and consultation: an automated chatbot, a General Practitioner (GP), or a combination of both. METHODS: Between May and June 2019, we conducted an online study, advertised via Facebook, for UK citizens. It was a factorial simulation experiment with three within-subject factors (perceived health issue stigma, severity, and consultation source) and six between-subject covariates. Acceptability rating for each consultation source was the dependent variable. A single mixed-model ANOVA was performed. RESULTS: Amongst 237 participants (65% aged over 45 years, 73% women), GP consultations were seen as most acceptable, followed by the GP-chatbot service. Chatbots were seen as the least acceptable consultation source for severe health issues, while their acceptability was significantly higher for stigmatised health issues. No associations between participants’ characteristics and acceptability were found. CONCLUSIONS: Although healthcare professionals are perceived as the most desired sources of health information, chatbots may be useful for sensitive health issues in which disclosure of personal information is challenging. However, chatbots are less acceptable for health issues of higher severity and should not be recommended for use within that context. Policymakers and digital service designers need to recognise the limitations of health chatbots. Future research should establish a set of health topics most suitable for chatbot-led interventions and primary healthcare services. SAGE Publications 2021-12-08 /pmc/articles/PMC8670785/ /pubmed/34917391 http://dx.doi.org/10.1177/20552076211063012 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
spellingShingle | Original Research Miles, Oliver West, Robert Nadarzynski, Tom Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey |
title | Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey |
title_full | Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey |
title_fullStr | Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey |
title_full_unstemmed | Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey |
title_short | Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey |
title_sort | health chatbots acceptability moderated by perceived stigma and severity: a cross-sectional survey |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8670785/ https://www.ncbi.nlm.nih.gov/pubmed/34917391 http://dx.doi.org/10.1177/20552076211063012 |
work_keys_str_mv | AT milesoliver healthchatbotsacceptabilitymoderatedbyperceivedstigmaandseverityacrosssectionalsurvey AT westrobert healthchatbotsacceptabilitymoderatedbyperceivedstigmaandseverityacrosssectionalsurvey AT nadarzynskitom healthchatbotsacceptabilitymoderatedbyperceivedstigmaandseverityacrosssectionalsurvey |
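The METHODS summary in this record describes a factorial design with three within-subject factors (perceived stigma, severity, and consultation source) analysed with a single mixed-model ANOVA on acceptability ratings. The sketch below is only a rough illustration of how a repeated-measures ANOVA over those three within-subject factors might be specified in Python with statsmodels; the data are simulated, the column names are hypothetical, and the study's six between-subject covariates are omitted (handling them would require a fuller mixed model such as statsmodels' MixedLM). This is not the authors' analysis code.

```python
# A minimal, hypothetical sketch: repeated-measures ANOVA on simulated
# acceptability ratings with three within-subject factors (stigma,
# severity, consultation source). Not the study's actual data or code.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

participants = range(1, 41)                 # 40 simulated participants
stigma_levels = ["low", "high"]             # perceived stigma of the issue
severity_levels = ["low", "high"]           # perceived severity of the issue
sources = ["chatbot", "GP", "GP+chatbot"]   # consultation source

# Build a fully crossed, balanced long-format dataset:
# one acceptability rating per participant per factor combination.
rows = []
for p in participants:
    for stigma in stigma_levels:
        for severity in severity_levels:
            for source in sources:
                rating = int(rng.integers(1, 8))  # hypothetical 1-7 rating
                rows.append((p, stigma, severity, source, rating))

df = pd.DataFrame(
    rows, columns=["participant", "stigma", "severity", "source", "acceptability"]
)

# Repeated-measures ANOVA over the three within-subject factors.
# AnovaRM requires exactly one observation per participant per cell.
result = AnovaRM(
    df,
    depvar="acceptability",
    subject="participant",
    within=["stigma", "severity", "source"],
).fit()

print(result)
```

AnovaRM expects long-format data with exactly one observation per participant per within-subject cell, which matches a fully crossed factorial design like the one described in the abstract; unbalanced data or between-subject terms would call for a different modelling approach.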