Adaptive trust calibration for human-AI collaboration
The safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency in keeping pr...
Main authors: Okamura, Kazuo; Yamada, Seiji
Format: Online Article Text
Language: English
Published: Public Library of Science, 2020
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7034851/ https://www.ncbi.nlm.nih.gov/pubmed/32084201 http://dx.doi.org/10.1371/journal.pone.0229132
Similar items

- A Quantum Model of Trust Calibration in Human–AI Interactions
  by: Roeder, Luisa, et al. Published: (2023)
- Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
  by: Tomsett, Richard, et al. Published: (2020)
- An investigation on trust in AI-enabled collaboration: Application of AI-Driven chatbot in accommodation-based sharing economy
  by: Cheng, Xusen, et al. Published: (2022)
- Explanatory machine learning for justified trust in human-AI collaboration: Experiments on file deletion recommendations
  by: Göbel, Kyra, et al. Published: (2022)
- Trust does not need to be human: it is possible to trust medical AI
  by: Ferrario, Andrea, et al. Published: (2021)