Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects
The complexity of users' emotional responses to Artificial Intelligence (AI) virtual assistants is mainly manifested in user motivation and social emotion, but current research lacks an effective conversion path from emotion to acceptance. This paper approaches the question from the perspective of trust, establishes an AI virtual assistant acceptance model, conducts an empirical study based on survey data from 240 questionnaires, and uses multilevel regression analysis and the bootstrap method to analyze the data. The results showed that functionality and social emotions had a significant effect on trust, that perceived humanity had an inverted U-shaped relationship with trust, and that trust mediated the relationships of both functionality and social emotions with acceptance. The findings explain the emotional complexity of users toward AI virtual assistants and extend the transformation path of technology acceptance from the trust perspective, which has implications for the development and design of AI applications.
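The abstract describes testing trust's mediating role with multilevel regression and the bootstrap method. The sketch below is a minimal illustration, not the authors' code: the variable names (functionality, trust, acceptance), the simulated effect sizes, and the plain OLS path models are all assumptions, included only to show how a percentile-bootstrap confidence interval for an indirect (trust-mediated) effect is typically computed.

```python
# Hypothetical illustration of a percentile-bootstrap test of an indirect effect
# (functionality -> trust -> acceptance). Data are simulated; this is not the
# article's dataset or analysis code.
import numpy as np

rng = np.random.default_rng(42)

# Simulate toy data with n = 240, matching the 240 questionnaires in the record.
n = 240
functionality = rng.normal(size=n)
trust = 0.5 * functionality + rng.normal(scale=0.8, size=n)
acceptance = 0.6 * trust + 0.2 * functionality + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    """Estimate a*b: path a (x -> m) times path b (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]                    # slope of mediator on predictor
    X = np.column_stack([np.ones_like(x), m, x])  # intercept, mediator, predictor
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]   # slope of outcome on mediator
    return a * b

# Percentile bootstrap: resample cases, re-estimate a*b, take the 2.5/97.5 percentiles.
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(functionality[idx], trust[idx], acceptance[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect = {indirect_effect(functionality, trust, acceptance):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

A 95% bootstrap interval that excludes zero is read as evidence of mediation; the article's own analysis, of course, uses its measured survey constructs rather than simulated data.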
Main Authors: | Zhang, Shiying; Meng, Zixuan; Chen, Beibei; Yang, Xiu; Zhao, Xinran
---|---|
Format: | Online Article Text
Language: | English
Published: | Frontiers Media S.A., 2021
Subjects: | Psychology
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8414898/ https://www.ncbi.nlm.nih.gov/pubmed/34484086 http://dx.doi.org/10.3389/fpsyg.2021.728495
_version_ | 1783747870197809152 |
---|---|
author | Zhang, Shiying Meng, Zixuan Chen, Beibei Yang, Xiu Zhao, Xinran |
author_facet | Zhang, Shiying Meng, Zixuan Chen, Beibei Yang, Xiu Zhao, Xinran |
author_sort | Zhang, Shiying |
collection | PubMed |
description | The complexity of users' emotional responses to Artificial Intelligence (AI) virtual assistants is mainly manifested in user motivation and social emotion, but current research lacks an effective conversion path from emotion to acceptance. This paper approaches the question from the perspective of trust, establishes an AI virtual assistant acceptance model, conducts an empirical study based on survey data from 240 questionnaires, and uses multilevel regression analysis and the bootstrap method to analyze the data. The results showed that functionality and social emotions had a significant effect on trust, that perceived humanity had an inverted U-shaped relationship with trust, and that trust mediated the relationships of both functionality and social emotions with acceptance. The findings explain the emotional complexity of users toward AI virtual assistants and extend the transformation path of technology acceptance from the trust perspective, which has implications for the development and design of AI applications. |
format | Online Article Text |
id | pubmed-8414898 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8414898 2021-09-04 Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects Zhang, Shiying Meng, Zixuan Chen, Beibei Yang, Xiu Zhao, Xinran Front Psychol Psychology The complexity of users' emotional responses to Artificial Intelligence (AI) virtual assistants is mainly manifested in user motivation and social emotion, but current research lacks an effective conversion path from emotion to acceptance. This paper approaches the question from the perspective of trust, establishes an AI virtual assistant acceptance model, conducts an empirical study based on survey data from 240 questionnaires, and uses multilevel regression analysis and the bootstrap method to analyze the data. The results showed that functionality and social emotions had a significant effect on trust, that perceived humanity had an inverted U-shaped relationship with trust, and that trust mediated the relationships of both functionality and social emotions with acceptance. The findings explain the emotional complexity of users toward AI virtual assistants and extend the transformation path of technology acceptance from the trust perspective, which has implications for the development and design of AI applications. Frontiers Media S.A. 2021-08-13 /pmc/articles/PMC8414898/ /pubmed/34484086 http://dx.doi.org/10.3389/fpsyg.2021.728495 Text en Copyright © 2021 Zhang, Meng, Chen, Yang and Zhao. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Zhang, Shiying Meng, Zixuan Chen, Beibei Yang, Xiu Zhao, Xinran Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects |
title | Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects |
title_full | Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects |
title_fullStr | Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects |
title_full_unstemmed | Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects |
title_short | Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants—Trust-Based Mediating Effects |
title_sort | motivation, social emotion, and the acceptance of artificial intelligence virtual assistants—trust-based mediating effects |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8414898/ https://www.ncbi.nlm.nih.gov/pubmed/34484086 http://dx.doi.org/10.3389/fpsyg.2021.728495 |
work_keys_str_mv | AT zhangshiying motivationsocialemotionandtheacceptanceofartificialintelligencevirtualassistantstrustbasedmediatingeffects AT mengzixuan motivationsocialemotionandtheacceptanceofartificialintelligencevirtualassistantstrustbasedmediatingeffects AT chenbeibei motivationsocialemotionandtheacceptanceofartificialintelligencevirtualassistantstrustbasedmediatingeffects AT yangxiu motivationsocialemotionandtheacceptanceofartificialintelligencevirtualassistantstrustbasedmediatingeffects AT zhaoxinran motivationsocialemotionandtheacceptanceofartificialintelligencevirtualassistantstrustbasedmediatingeffects |