Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States
Advances in Artificial Intelligence (AI) are poised to transform society, national defense, and the economy by increasing efficiency, precision, and safety. Yet, widespread adoption within society depends on public trust and willingness to use AI-enabled technologies. In this study, we propose the possibility of an AI “trust paradox,” in which individuals’ willingness to use AI-enabled technologies exceeds their level of trust in these capabilities.
Main Authors: | Kreps, Sarah; George, Julie; Lushenko, Paul; Rao, Adi |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2023 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10353804/ https://www.ncbi.nlm.nih.gov/pubmed/37463148 http://dx.doi.org/10.1371/journal.pone.0288109 |
_version_ | 1785074782967955456 |
author | Kreps, Sarah; George, Julie; Lushenko, Paul; Rao, Adi |
author_facet | Kreps, Sarah; George, Julie; Lushenko, Paul; Rao, Adi |
author_sort | Kreps, Sarah |
collection | PubMed |
description | Advances in Artificial Intelligence (AI) are poised to transform society, national defense, and the economy by increasing efficiency, precision, and safety. Yet, widespread adoption within society depends on public trust and willingness to use AI-enabled technologies. In this study, we propose the possibility of an AI “trust paradox,” in which individuals’ willingness to use AI-enabled technologies exceeds their level of trust in these capabilities. We conduct a two-part study to explore the trust paradox. First, we conduct a conjoint analysis, varying different attributes of AI-enabled technologies in different domains—including armed drones, general surgery, police surveillance, self-driving cars, and social media content moderation—to evaluate whether and under what conditions a trust paradox may exist. Second, we use causal mediation analysis in the context of a second survey experiment to help explain why individuals use AI-enabled technologies that they do not trust. We find strong support for the trust paradox, particularly in the area of AI-enabled police surveillance, where the levels of support for its use are both higher than other domains but also significantly exceed trust. We unpack these findings to show that several underlying beliefs help account for public attitudes of support, including the fear of missing out, optimism that future versions of the technology will be more trustworthy, a belief that the benefits of AI-enabled technologies outweigh the risks, and calculation that AI-enabled technologies yield efficiency gains. Our findings have important implications for the integration of AI-enabled technologies in multiple settings. |
format | Online Article Text |
id | pubmed-10353804 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-103538042023-07-19 Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States Kreps, Sarah George, Julie Lushenko, Paul Rao, Adi PLoS One Research Article Advances in Artificial Intelligence (AI) are poised to transform society, national defense, and the economy by increasing efficiency, precision, and safety. Yet, widespread adoption within society depends on public trust and willingness to use AI-enabled technologies. In this study, we propose the possibility of an AI “trust paradox,” in which individuals’ willingness to use AI-enabled technologies exceeds their level of trust in these capabilities. We conduct a two-part study to explore the trust paradox. First, we conduct a conjoint analysis, varying different attributes of AI-enabled technologies in different domains—including armed drones, general surgery, police surveillance, self-driving cars, and social media content moderation—to evaluate whether and under what conditions a trust paradox may exist. Second, we use causal mediation analysis in the context of a second survey experiment to help explain why individuals use AI-enabled technologies that they do not trust. We find strong support for the trust paradox, particularly in the area of AI-enabled police surveillance, where the levels of support for its use are both higher than other domains but also significantly exceed trust. We unpack these findings to show that several underlying beliefs help account for public attitudes of support, including the fear of missing out, optimism that future versions of the technology will be more trustworthy, a belief that the benefits of AI-enabled technologies outweigh the risks, and calculation that AI-enabled technologies yield efficiency gains. Our findings have important implications for the integration of AI-enabled technologies in multiple settings. 
Public Library of Science 2023-07-18 /pmc/articles/PMC10353804/ /pubmed/37463148 http://dx.doi.org/10.1371/journal.pone.0288109 Text en © 2023 Kreps et al https://creativecommons.org/licenses/by/4.0/This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article; Kreps, Sarah; George, Julie; Lushenko, Paul; Rao, Adi; Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States |
title | Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States |
title_full | Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States |
title_fullStr | Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States |
title_full_unstemmed | Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States |
title_short | Exploring the artificial intelligence “Trust paradox”: Evidence from a survey experiment in the United States |
title_sort | exploring the artificial intelligence “trust paradox”: evidence from a survey experiment in the united states |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10353804/ https://www.ncbi.nlm.nih.gov/pubmed/37463148 http://dx.doi.org/10.1371/journal.pone.0288109 |
work_keys_str_mv | AT krepssarah exploringtheartificialintelligencetrustparadoxevidencefromasurveyexperimentintheunitedstates AT georgejulie exploringtheartificialintelligencetrustparadoxevidencefromasurveyexperimentintheunitedstates AT lushenkopaul exploringtheartificialintelligencetrustparadoxevidencefromasurveyexperimentintheunitedstates AT raoadi exploringtheartificialintelligencetrustparadoxevidencefromasurveyexperimentintheunitedstates |