Artificial fairness? Trust in algorithmic police decision-making

OBJECTIVES: Test whether (1) people view a policing decision made by an algorithm as more or less trustworthy than when an officer makes the same decision; (2) people who are presented with a specific instance of algorithmic policing have greater or lesser support for the use of algorithmic policing in general; and (3) people use trust as a heuristic through which to make sense of an unfamiliar technology like algorithmic policing.
METHODS: An online experiment tested whether different decision-making methods, outcomes and scenario types affect judgements about the appropriateness and fairness of decision-making and the general acceptability of police use of this particular technology.
RESULTS: People see a decision as less fair and less appropriate when an algorithm decides, compared to when an officer decides. Yet, perceptions of fairness and appropriateness were strong predictors of support for police use of algorithms, and being exposed to a successful use of an algorithm was linked, via trust in the decision made, to greater support for police use of algorithms.
CONCLUSIONS: Making decisions based solely on algorithms might damage trust, and the more police rely solely on algorithmic decision-making, the less people may trust those decisions. However, mere exposure to the successful use of algorithms seems to enhance the general acceptability of this technology.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11292-021-09484-9.

Bibliographic Details
Main Authors: Hobson, Zoë, Yesberg, Julia A., Bradford, Ben, Jackson, Jonathan
Format: Online Article Text
Language: English
Published: Springer Netherlands 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8435155/
https://www.ncbi.nlm.nih.gov/pubmed/34539294
http://dx.doi.org/10.1007/s11292-021-09484-9
collection PubMed
id pubmed-8435155
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling J Exp Criminol. Springer Netherlands, published online 2021-09-12. © The Author(s) 2021. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
topic Research Article