Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study
BACKGROUND: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots...
Main Authors: | Nadarzynski, Tom; Miles, Oliver; Cowie, Aimee; Ridge, Damien |
Format: | Online Article Text |
Language: | English |
Published: | SAGE Publications, 2019 |
Subjects: | Original Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6704417/ https://www.ncbi.nlm.nih.gov/pubmed/31467682 http://dx.doi.org/10.1177/2055207619871808 |
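The access links above identify the same record three ways: a PMC ID (PMC6704417), a PubMed ID (31467682) and a DOI. As a minimal sketch (not part of the original record), the PMID can be used to retrieve the abstract programmatically through NCBI's public E-utilities efetch endpoint; the example below assumes network access and uses only the Python standard library.

```python
# Fetch this record's abstract from PubMed via NCBI E-utilities (efetch).
# The PMID comes from the record's Online Access field.
import urllib.request

pmid = "31467682"
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
    f"?db=pubmed&id={pmid}&rettype=abstract&retmode=text"
)

# Plain-text abstract is returned; error handling omitted for brevity.
with urllib.request.urlopen(url) as response:
    print(response.read().decode("utf-8"))
```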
_version_ | 1783445502669357056 |
author | Nadarzynski, Tom; Miles, Oliver; Cowie, Aimee; Ridge, Damien |
author_facet | Nadarzynski, Tom; Miles, Oliver; Cowie, Aimee; Ridge, Damien |
author_sort | Nadarzynski, Tom |
collection | PubMed |
description | BACKGROUND: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants’ willingness to engage with AI-led health chatbots. METHODS: The study incorporated semi-structured interviews (N=29) which informed the development of an online survey (N=216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. A survey of 24 items explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor. RESULTS: Three broad themes: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’ were identified, outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%), correlated negatively with perceived poorer IT skills OR = 0.32 [CI(95%):0.13–0.78] and dislike for talking to computers OR = 0.77 [CI(95%):0.60–0.99] as well as positively correlated with perceived utility OR = 5.10 [CI(95%):3.08–8.43], positive attitude OR = 2.71 [CI(95%):1.77–4.16] and perceived trustworthiness OR = 1.92 [CI(95%):1.13–3.25]. CONCLUSION: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients’ concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients’ perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots. |
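The METHODS describe binary (logistic) regressions with a single predictor, with results reported as odds ratios (OR) and 95% confidence intervals, e.g. OR = 5.10 [CI(95%):3.08–8.43] for perceived utility. Below is a minimal sketch of that style of analysis using statsmodels; the data, variable names and values are hypothetical illustrations, not the study's own.

```python
# Sketch: binary logistic regression with a single predictor, reporting
# an odds ratio with a 95% CI. Hypothetical data, not the study dataset.
import numpy as np
import statsmodels.api as sm

# Hypothetical: perceived-utility score vs. acceptability (1 = acceptable).
utility = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 1, 2, 3, 4, 5, 3])
acceptable = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1])

X = sm.add_constant(utility)            # intercept + single predictor
model = sm.Logit(acceptable, X).fit(disp=0)

odds_ratio = np.exp(model.params[1])            # exponentiated coefficient
ci_low, ci_high = np.exp(model.conf_int()[1])   # 95% CI on the OR scale
print(f"OR = {odds_ratio:.2f} [CI(95%): {ci_low:.2f}-{ci_high:.2f}]")
```

Exponentiating the fitted coefficient converts it from a log-odds scale to an odds ratio, which is why an OR above 1 (e.g. perceived utility) indicates a positive association with acceptability and an OR below 1 (e.g. poorer IT skills) a negative one.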
format | Online Article Text |
id | pubmed-6704417 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | SAGE Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-6704417 2019-08-29 Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study Nadarzynski, Tom Miles, Oliver Cowie, Aimee Ridge, Damien Digit Health Original Research BACKGROUND: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants’ willingness to engage with AI-led health chatbots. METHODS: The study incorporated semi-structured interviews (N=29) which informed the development of an online survey (N=216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. A survey of 24 items explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor. RESULTS: Three broad themes: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’ were identified, outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%), correlated negatively with perceived poorer IT skills OR = 0.32 [CI(95%):0.13–0.78] and dislike for talking to computers OR = 0.77 [CI(95%):0.60–0.99] as well as positively correlated with perceived utility OR = 5.10 [CI(95%):3.08–8.43], positive attitude OR = 2.71 [CI(95%):1.77–4.16] and perceived trustworthiness OR = 1.92 [CI(95%):1.13–3.25]. CONCLUSION: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients’ concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients’ perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots. SAGE Publications 2019-08-21 /pmc/articles/PMC6704417/ /pubmed/31467682 http://dx.doi.org/10.1177/2055207619871808 Text en © The Author(s) 2019 http://creativecommons.org/licenses/by-nc/4.0/ Creative Commons Non Commercial CC BY-NC: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (http://www.creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
spellingShingle | Original Research Nadarzynski, Tom Miles, Oliver Cowie, Aimee Ridge, Damien Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study |
title | Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study |
title_full | Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study |
title_fullStr | Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study |
title_full_unstemmed | Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study |
title_short | Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study |
title_sort | acceptability of artificial intelligence (ai)-led chatbot services in healthcare: a mixed-methods study |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6704417/ https://www.ncbi.nlm.nih.gov/pubmed/31467682 http://dx.doi.org/10.1177/2055207619871808 |
work_keys_str_mv | AT nadarzynskitom acceptabilityofartificialintelligenceailedchatbotservicesinhealthcareamixedmethodsstudy AT milesoliver acceptabilityofartificialintelligenceailedchatbotservicesinhealthcareamixedmethodsstudy AT cowieaimee acceptabilityofartificialintelligenceailedchatbotservicesinhealthcareamixedmethodsstudy AT ridgedamien acceptabilityofartificialintelligenceailedchatbotservicesinhealthcareamixedmethodsstudy |