Check the box! How to deal with automation bias in AI-based personnel selection
Main Authors: | Kupfer, Cordula; Prassl, Rita; Fleiß, Jürgen; Malin, Christine; Thalmann, Stefan; Kubicek, Bettina |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2023 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10113449/ https://www.ncbi.nlm.nih.gov/pubmed/37089740 http://dx.doi.org/10.3389/fpsyg.2023.1118723 |
_version_ | 1785027839586729984 |
author | Kupfer, Cordula; Prassl, Rita; Fleiß, Jürgen; Malin, Christine; Thalmann, Stefan; Kubicek, Bettina |
author_facet | Kupfer, Cordula; Prassl, Rita; Fleiß, Jürgen; Malin, Christine; Thalmann, Stefan; Kubicek, Bettina |
author_sort | Kupfer, Cordula |
collection | PubMed |
description | Artificial Intelligence (AI) as decision support for personnel preselection, e.g., in the form of a dashboard, promises a more effective and fairer selection process. However, AI-based decision support systems might prompt decision makers to thoughtlessly accept the system’s recommendation. As this so-called automation bias contradicts ethical and legal requirements of human oversight for the use of AI-based recommendations in personnel preselection, the present study investigates strategies to reduce automation bias and increase decision quality. Based on the Elaboration Likelihood Model, we assume that instructing decision makers about the possibility of system errors and their responsibility for the decision, as well as providing an appropriate level of data aggregation should encourage decision makers to process information systematically instead of heuristically. We conducted a 3 (general information, information about system errors, information about responsibility) x 2 (low vs. high aggregated data) experiment to investigate which strategy can reduce automation bias and enhance decision quality. We found that less automation bias in terms of higher scores on verification intensity indicators correlated with higher objective decision quality, i.e., more suitable applicants selected. Decision makers who received information about system errors scored higher on verification intensity indicators and rated subjective decision quality higher, but decision makers who were informed about their responsibility, unexpectedly, did not. Regarding aggregation level of data, decision makers of the highly aggregated data group spent less time on the level of the dashboard where highly aggregated data were presented. Our results show that it is important to inform decision makers who interact with AI-based decision-support systems about potential system errors and provide them with less aggregated data to reduce automation bias and enhance decision quality. |
format | Online Article Text |
id | pubmed-10113449 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10113449 2023-04-20 Check the box! How to deal with automation bias in AI-based personnel selection Kupfer, Cordula; Prassl, Rita; Fleiß, Jürgen; Malin, Christine; Thalmann, Stefan; Kubicek, Bettina Front Psychol Psychology Frontiers Media S.A. 2023-04-05 /pmc/articles/PMC10113449/ /pubmed/37089740 http://dx.doi.org/10.3389/fpsyg.2023.1118723 Text en Copyright © 2023 Kupfer, Prassl, Fleiß, Malin, Thalmann and Kubicek. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology; Kupfer, Cordula; Prassl, Rita; Fleiß, Jürgen; Malin, Christine; Thalmann, Stefan; Kubicek, Bettina; Check the box! How to deal with automation bias in AI-based personnel selection |
title | Check the box! How to deal with automation bias in AI-based personnel selection |
title_full | Check the box! How to deal with automation bias in AI-based personnel selection |
title_fullStr | Check the box! How to deal with automation bias in AI-based personnel selection |
title_full_unstemmed | Check the box! How to deal with automation bias in AI-based personnel selection |
title_short | Check the box! How to deal with automation bias in AI-based personnel selection |
title_sort | check the box! how to deal with automation bias in ai-based personnel selection |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10113449/ https://www.ncbi.nlm.nih.gov/pubmed/37089740 http://dx.doi.org/10.3389/fpsyg.2023.1118723 |
work_keys_str_mv | AT kupfercordula checktheboxhowtodealwithautomationbiasinaibasedpersonnelselection AT prasslrita checktheboxhowtodealwithautomationbiasinaibasedpersonnelselection AT fleißjurgen checktheboxhowtodealwithautomationbiasinaibasedpersonnelselection AT malinchristine checktheboxhowtodealwithautomationbiasinaibasedpersonnelselection AT thalmannstefan checktheboxhowtodealwithautomationbiasinaibasedpersonnelselection AT kubicekbettina checktheboxhowtodealwithautomationbiasinaibasedpersonnelselection |