
Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant

BACKGROUND: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues of using conversational assistants for medical information by patients and consumers are not understood....


Bibliographic Details
Main Authors: Bickmore, Timothy W, Trinh, Ha, Olafsson, Stefan, O'Leary, Teresa K, Asadi, Reza, Rickles, Nathaniel M, Cruz, Ricardo
Format: Online Article Text
Language: English
Published: JMIR Publications 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6231817/
https://www.ncbi.nlm.nih.gov/pubmed/30181110
http://dx.doi.org/10.2196/11510
author Bickmore, Timothy W
Trinh, Ha
Olafsson, Stefan
O'Leary, Teresa K
Asadi, Reza
Rickles, Nathaniel M
Cruz, Ricardo
author_facet Bickmore, Timothy W
Trinh, Ha
Olafsson, Stefan
O'Leary, Teresa K
Asadi, Reza
Rickles, Nathaniel M
Cruz, Ricardo
author_sort Bickmore, Timothy W
collection PubMed
description BACKGROUND: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues of using conversational assistants for medical information by patients and consumers are not understood. OBJECTIVE: To determine the prevalence and nature of the harm that could result from patients or consumers using conversational assistants for medical information. METHODS: Participants were given medical problems to pose to Siri, Alexa, or Google Assistant, and asked to determine an action to take based on information from the system. Assignment of tasks and systems was randomized across participants, and participants queried the conversational assistants in their own words, making as many attempts as needed until they either reported an action to take or gave up. Participant-reported actions for each medical task were rated for patient harm using an Agency for Healthcare Research and Quality harm scale. RESULTS: Fifty-four subjects completed the study with a mean age of 42 years (SD 18). Twenty-nine (54%) were female, 31 (57%) were Caucasian, and 26 (50%) were college educated. Only 8 (15%) reported using a conversational assistant regularly, while 22 (41%) had never used one, and 24 (44%) had tried one “a few times.” Forty-four (82%) used computers regularly. Subjects were able to complete only 168 (43%) of their 394 tasks. Of these, 49 (29%) reported actions that could have resulted in some degree of patient harm, including 27 (16%) that could have resulted in death. CONCLUSIONS: Reliance on conversational assistants for actionable medical information represents a safety risk for patients and consumers. Patients should be cautioned not to use these technologies for answers to medical questions they intend to act on without further consultation from a health care provider.
format Online
Article
Text
id pubmed-6231817
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-6231817 2018-12-03 Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant Bickmore, Timothy W Trinh, Ha Olafsson, Stefan O'Leary, Teresa K Asadi, Reza Rickles, Nathaniel M Cruz, Ricardo J Med Internet Res Original Paper BACKGROUND: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues of using conversational assistants for medical information by patients and consumers are not understood. OBJECTIVE: To determine the prevalence and nature of the harm that could result from patients or consumers using conversational assistants for medical information. METHODS: Participants were given medical problems to pose to Siri, Alexa, or Google Assistant, and asked to determine an action to take based on information from the system. Assignment of tasks and systems was randomized across participants, and participants queried the conversational assistants in their own words, making as many attempts as needed until they either reported an action to take or gave up. Participant-reported actions for each medical task were rated for patient harm using an Agency for Healthcare Research and Quality harm scale. RESULTS: Fifty-four subjects completed the study with a mean age of 42 years (SD 18). Twenty-nine (54%) were female, 31 (57%) were Caucasian, and 26 (50%) were college educated. Only 8 (15%) reported using a conversational assistant regularly, while 22 (41%) had never used one, and 24 (44%) had tried one “a few times.” Forty-four (82%) used computers regularly. Subjects were able to complete only 168 (43%) of their 394 tasks. Of these, 49 (29%) reported actions that could have resulted in some degree of patient harm, including 27 (16%) that could have resulted in death. CONCLUSIONS: Reliance on conversational assistants for actionable medical information represents a safety risk for patients and consumers. Patients should be cautioned not to use these technologies for answers to medical questions they intend to act on without further consultation from a health care provider. JMIR Publications 2018-09-04 /pmc/articles/PMC6231817/ /pubmed/30181110 http://dx.doi.org/10.2196/11510 Text en ©Timothy W. Bickmore, Ha Trinh, Stefan Olafsson, Teresa K O'Leary, Reza Asadi, Nathaniel M Rickles, Ricardo Cruz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.09.2018. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Original Paper
Bickmore, Timothy W
Trinh, Ha
Olafsson, Stefan
O'Leary, Teresa K
Asadi, Reza
Rickles, Nathaniel M
Cruz, Ricardo
Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant
title Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant
title_full Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant
title_fullStr Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant
title_full_unstemmed Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant
title_short Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant
title_sort patient and consumer safety risks when using conversational assistants for medical information: an observational study of siri, alexa, and google assistant
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6231817/
https://www.ncbi.nlm.nih.gov/pubmed/30181110
http://dx.doi.org/10.2196/11510
work_keys_str_mv AT bickmoretimothyw patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant
AT trinhha patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant
AT olafssonstefan patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant
AT olearyteresak patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant
AT asadireza patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant
AT ricklesnathanielm patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant
AT cruzricardo patientandconsumersafetyriskswhenusingconversationalassistantsformedicalinformationanobservationalstudyofsirialexaandgoogleassistant