Adapting to the algorithm: how accuracy comparisons promote the use of a decision aid

Bibliographic Details
Main Authors: Liang, Garston, Sloane, Jennifer F., Donkin, Christopher, Newell, Ben R.
Format: Online Article Text
Language: English
Published: Springer International Publishing 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8825899/
https://www.ncbi.nlm.nih.gov/pubmed/35133521
http://dx.doi.org/10.1186/s41235-022-00364-y
Description
Summary: In three experiments, we sought to understand when and why people use an algorithmic decision aid. Distinct from recent approaches, we explicitly enumerate the algorithm’s accuracy while also providing summary feedback and training that allowed participants to assess their own skills. Our results highlight that such direct performance comparisons between the algorithm and the individual encourage a strategy of selective reliance on the decision aid; individuals ignored the algorithm when the task was easier and relied on the algorithm when the task was harder. Our systematic investigation of summary feedback, training experience, and strategy hint manipulations shows that further opportunities to learn about the algorithm encourage not only increased reliance on the algorithm but also engagement in experimentation and verification of its recommendations. Together, our findings emphasize the decision-maker’s capacity to learn about the algorithm, providing insights into how we can improve the use of decision aids. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s41235-022-00364-y.