Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study

Bibliographic Details
Main Authors: Nadarzynski, Tom, Miles, Oliver, Cowie, Aimee, Ridge, Damien
Format: Online Article Text
Language: English
Published: SAGE Publications 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6704417/
https://www.ncbi.nlm.nih.gov/pubmed/31467682
http://dx.doi.org/10.1177/2055207619871808
Description
Summary: BACKGROUND: Artificial intelligence (AI) is increasingly being used in healthcare. Here, AI-based chatbot systems can act as automated conversational agents, capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants’ willingness to engage with AI-led health chatbots. METHODS: The study incorporated semi-structured interviews (N = 29), which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. A survey of 24 items explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions with a single categorical predictor. RESULTS: Three broad themes were identified: ‘Understanding of chatbots’, ‘AI hesitancy’ and ‘Motivations for health chatbots’, outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability of health chatbots (67%), which correlated negatively with perceived poorer IT skills, OR = 0.32 [95% CI: 0.13–0.78], and dislike for talking to computers, OR = 0.77 [95% CI: 0.60–0.99], and positively with perceived utility, OR = 5.10 [95% CI: 3.08–8.43], positive attitude, OR = 2.71 [95% CI: 1.77–4.16], and perceived trustworthiness, OR = 1.92 [95% CI: 1.13–3.25]. CONCLUSION: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches addressing patients’ concerns and optimising user experience in order to achieve the best uptake and utilisation. Patients’ perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.
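The abstract reports odds ratios with 95% confidence intervals from binary (logistic) regressions with a single predictor. The sketch below is not the authors' analysis code; it is a minimal Python illustration, using hypothetical variable names and simulated data, of how such an OR and its 95% CI are typically obtained with statsmodels.

```python
# Illustrative sketch only (not the authors' code): a binary logistic
# regression with a single predictor, reported as an odds ratio with a
# 95% confidence interval. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 216  # matches the survey sample size, but the data here are simulated

# Hypothetical survey variables: a 1-5 attitudinal score and a binary
# acceptability outcome (1 = willing to use a health chatbot).
perceived_utility = rng.integers(1, 6, size=n)
logit = -2.0 + 0.8 * perceived_utility          # assumed relationship for simulation
acceptable = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"acceptable": acceptable,
                   "perceived_utility": perceived_utility})

# Fit the logistic regression: binary outcome ~ single predictor.
X = sm.add_constant(df[["perceived_utility"]])
model = sm.Logit(df["acceptable"], X).fit(disp=False)

# Exponentiating the coefficient and its confidence bounds gives the
# odds ratio and 95% CI, the form used in the RESULTS section.
or_estimate = np.exp(model.params["perceived_utility"])
ci_low, ci_high = np.exp(model.conf_int().loc["perceived_utility"])
print(f"OR = {or_estimate:.2f} [95% CI: {ci_low:.2f}-{ci_high:.2f}]")
```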