Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment
BACKGROUND: Prior studies have demonstrated the safety risks when patients and consumers use conversational assistants such as Apple’s Siri and Amazon’s Alexa for obtaining medical information. OBJECTIVE: The aim of this study is to evaluate two approaches to reducing the likelihood that patients or...
Main Authors: | Bickmore, Timothy W; Ólafsson, Stefán; O'Leary, Teresa K |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | JMIR Publications, 2021 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8663571/ https://www.ncbi.nlm.nih.gov/pubmed/34751661 http://dx.doi.org/10.2196/30704 |
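The abstract recorded below (in the description field) reports χ²(1) comparisons of participants' stated actions across response conditions. As a rough illustration of that kind of analysis only, the sketch below runs a chi-square test of independence on a hypothetical 2×2 table of counts; the numbers, labels, and scipy usage are assumptions for the example and are not the study's data.

```python
# Illustrative only: a chi-square test of independence of the kind the
# abstract reports (e.g., disclaimer heard vs. "would contact a physician
# before acting"). The counts below are invented for the sketch; they are
# not the study's data.
from scipy.stats import chi2_contingency

# Rows: disclaimer heard / no disclaimer.
# Columns: would contact a physician first / would not.
observed = [
    [45, 15],  # hypothetical counts, disclaimer condition
    [12, 48],  # hypothetical counts, no-disclaimer condition
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.1f}, P = {p:.3g}")
```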
_version_ | 1784613668626890752 |
---|---|
author | Bickmore, Timothy W; Ólafsson, Stefán; O'Leary, Teresa K |
author_facet | Bickmore, Timothy W; Ólafsson, Stefán; O'Leary, Teresa K |
author_sort | Bickmore, Timothy W |
collection | PubMed |
description | BACKGROUND: Prior studies have demonstrated the safety risks when patients and consumers use conversational assistants such as Apple’s Siri and Amazon’s Alexa for obtaining medical information. OBJECTIVE: The aim of this study is to evaluate two approaches to reducing the likelihood that patients or consumers will act on the potentially harmful medical information they receive from conversational assistants. METHODS: Participants were given medical problems to pose to conversational assistants that had been previously demonstrated to result in potentially harmful recommendations. Each conversational assistant’s response was randomly varied to include either a correct or an incorrect paraphrase of the query, and either a disclaimer message telling the participants that they should not act on the advice without first talking to a physician or no disclaimer. The participants were then asked what actions they would take based on their interaction, along with the likelihood of taking the action. The reported actions were recorded and analyzed, and the participants were interviewed at the end of each interaction. RESULTS: A total of 32 participants completed the study, each interacting with 4 conversational assistants. The participants were on average aged 42.44 (SD 14.08) years, 53% (17/32) were women, and 66% (21/32) were college educated. Participants who heard a correct paraphrase of their query were significantly more likely to state that they would follow the medical advice provided by the conversational assistant (χ²(1)=3.1; P=.04). Participants who heard a disclaimer message were significantly more likely to say that they would contact a physician or health professional before acting on the medical advice received (χ²(1)=43.5; P=.001). CONCLUSIONS: Designers of conversational systems should consider incorporating both disclaimers and feedback on query understanding in response to user queries for medical advice. Unconstrained natural language input should not be used in systems designed specifically to provide medical advice. |
format | Online Article Text |
id | pubmed-8663571 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-8663571 2022-01-05 Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment Bickmore, Timothy W; Ólafsson, Stefán; O'Leary, Teresa K J Med Internet Res Original Paper BACKGROUND: Prior studies have demonstrated the safety risks when patients and consumers use conversational assistants such as Apple’s Siri and Amazon’s Alexa for obtaining medical information. OBJECTIVE: The aim of this study is to evaluate two approaches to reducing the likelihood that patients or consumers will act on the potentially harmful medical information they receive from conversational assistants. METHODS: Participants were given medical problems to pose to conversational assistants that had been previously demonstrated to result in potentially harmful recommendations. Each conversational assistant’s response was randomly varied to include either a correct or an incorrect paraphrase of the query, and either a disclaimer message telling the participants that they should not act on the advice without first talking to a physician or no disclaimer. The participants were then asked what actions they would take based on their interaction, along with the likelihood of taking the action. The reported actions were recorded and analyzed, and the participants were interviewed at the end of each interaction. RESULTS: A total of 32 participants completed the study, each interacting with 4 conversational assistants. The participants were on average aged 42.44 (SD 14.08) years, 53% (17/32) were women, and 66% (21/32) were college educated. Participants who heard a correct paraphrase of their query were significantly more likely to state that they would follow the medical advice provided by the conversational assistant (χ²(1)=3.1; P=.04). Participants who heard a disclaimer message were significantly more likely to say that they would contact a physician or health professional before acting on the medical advice received (χ²(1)=43.5; P=.001). CONCLUSIONS: Designers of conversational systems should consider incorporating both disclaimers and feedback on query understanding in response to user queries for medical advice. Unconstrained natural language input should not be used in systems designed specifically to provide medical advice. JMIR Publications 2021-11-09 /pmc/articles/PMC8663571/ /pubmed/34751661 http://dx.doi.org/10.2196/30704 Text en ©Timothy W Bickmore, Stefán Ólafsson, Teresa K O'Leary. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 09.11.2021. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included. |
spellingShingle | Original Paper Bickmore, Timothy W Ólafsson, Stefán O'Leary, Teresa K Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment |
title | Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment |
title_full | Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment |
title_fullStr | Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment |
title_full_unstemmed | Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment |
title_short | Mitigating Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: Exploratory Mixed Methods Experiment |
title_sort | mitigating patient and consumer safety risks when using conversational assistants for medical information: exploratory mixed methods experiment |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8663571/ https://www.ncbi.nlm.nih.gov/pubmed/34751661 http://dx.doi.org/10.2196/30704 |
work_keys_str_mv | AT bickmoretimothyw mitigatingpatientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationexploratorymixedmethodsexperiment AT olafssonstefan mitigatingpatientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationexploratorymixedmethodsexperiment AT olearyteresak mitigatingpatientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationexploratorymixedmethodsexperiment |
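The abstract's conclusions recommend pairing medical-advice responses with feedback on query understanding (a paraphrase) and a disclaimer. A minimal sketch of that pattern follows; the function name, message wording, and structure are invented for illustration, since the paper prescribes the pattern, not an implementation.

```python
# Sketch of the response pattern the conclusions recommend: confirm how
# the query was understood (paraphrase feedback), then append a disclaimer
# telling the user not to act without consulting a physician. All names
# and message text here are illustrative assumptions.

DISCLAIMER = (
    "This is general information, not medical advice. "
    "Do not act on it without first talking to a physician "
    "or other health professional."
)

def build_medical_response(understood_query: str, advice: str) -> str:
    """Wrap advice with query-understanding feedback and a disclaimer."""
    paraphrase = f"I understood your question as: \"{understood_query}\""
    return "\n".join([paraphrase, advice, DISCLAIMER])

if __name__ == "__main__":
    print(build_medical_response(
        "Can I take ibuprofen together with my blood pressure medication?",
        "Some over-the-counter pain relievers can interact with "
        "blood pressure medications.",
    ))
```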