
Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making

Bibliographic Details

Main Authors: Howard, John J., Rabbitt, Laura R., Sirotin, Yevgeniy B.
Format: Online Article Text
Language: English
Published: Public Library of Science 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7444527/
https://www.ncbi.nlm.nih.gov/pubmed/32822441
http://dx.doi.org/10.1371/journal.pone.0237855
_version_ 1783573824228294656
author Howard, John J.
Rabbitt, Laura R.
Sirotin, Yevgeniy B.
author_facet Howard, John J.
Rabbitt, Laura R.
Sirotin, Yevgeniy B.
author_sort Howard, John J.
collection PubMed
description In face recognition applications, humans often team with algorithms, reviewing algorithm results to make an identity decision. However, few studies have explicitly measured how algorithms influence human face matching performance. One study that did examine this interaction found a concerning deterioration of human accuracy in the presence of algorithm errors. We conducted an experiment to examine how prior face identity decisions influence subsequent human judgements about face similarity. 376 volunteers were asked to rate the similarity of face pairs along a scale. Volunteers performing the task were told that they were reviewing identity decisions made by different sources, either a computer or human, or were told to make their own judgement without prior information. Replicating past results, we found that prior identity decisions, presented as labels, influenced volunteers’ own identity judgements. We extend these results as follows. First, we show that the influence of identity decision labels was independent of indicated decision source (human or computer) despite volunteers’ greater distrust of human identification ability. Second, applying a signal detection theory framework, we show that prior identity decision labels did not reduce volunteers’ attention to the face pair. Discrimination performance was the same with and without the labels. Instead, prior identity decision labels altered volunteers’ internal criterion used to judge a face pair as “matching” or “non-matching”. This shifted volunteers’ face pair similarity judgements by a full step along the response scale. Our work shows how human face matching is affected by prior identity decision labels and we discuss how this may limit the total accuracy of human-algorithm teams performing face matching tasks.
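The description above refers to a signal detection theory framework in which the label effect appears as a shift in the decision criterion rather than a loss of discrimination. The sketch below is a minimal illustration of the two standard quantities involved, d' (discrimination) and c (criterion), computed from hit and false-alarm counts; the function name, the log-linear correction, and the example counts are illustrative assumptions, not values taken from the article.

from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction (add 0.5 per cell) keeps z-scores finite
    # when a hit or false-alarm rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # discrimination (sensitivity)
    criterion = -0.5 * (z_hit + z_fa)    # response bias (criterion)
    return d_prime, criterion

# Hypothetical counts: similar d', but the second observer uses a more
# liberal "match" criterion (negative c), as a label-induced shift might.
print(sdt_measures(84, 16, 16, 84))   # approx. (1.96, 0.00)
print(sdt_measures(93, 7, 31, 69))    # approx. (1.94, -0.48)

On this framework, the article's finding is that prior identity-decision labels move the criterion c while leaving d' essentially unchanged.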
format Online
Article
Text
id pubmed-7444527
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-74445272020-08-27 Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making Howard, John J. Rabbitt, Laura R. Sirotin, Yevgeniy B. PLoS One Research Article In face recognition applications, humans often team with algorithms, reviewing algorithm results to make an identity decision. However, few studies have explicitly measured how algorithms influence human face matching performance. One study that did examine this interaction found a concerning deterioration of human accuracy in the presence of algorithm errors. We conducted an experiment to examine how prior face identity decisions influence subsequent human judgements about face similarity. 376 volunteers were asked to rate the similarity of face pairs along a scale. Volunteers performing the task were told that they were reviewing identity decisions made by different sources, either a computer or human, or were told to make their own judgement without prior information. Replicating past results, we found that prior identity decisions, presented as labels, influenced volunteers’ own identity judgements. We extend these results as follows. First, we show that the influence of identity decision labels was independent of indicated decision source (human or computer) despite volunteers’ greater distrust of human identification ability. Second, applying a signal detection theory framework, we show that prior identity decision labels did not reduce volunteers’ attention to the face pair. Discrimination performance was the same with and without the labels. Instead, prior identity decision labels altered volunteers’ internal criterion used to judge a face pair as “matching” or “non-matching”. This shifted volunteers’ face pair similarity judgements by a full step along the response scale. Our work shows how human face matching is affected by prior identity decision labels and we discuss how this may limit the total accuracy of human-algorithm teams performing face matching tasks. Public Library of Science 2020-08-21 /pmc/articles/PMC7444527/ /pubmed/32822441 http://dx.doi.org/10.1371/journal.pone.0237855 Text en © 2020 Howard et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Howard, John J.
Rabbitt, Laura R.
Sirotin, Yevgeniy B.
Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making
title Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making
title_full Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making
title_fullStr Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making
title_full_unstemmed Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making
title_short Human-algorithm teaming in face recognition: How algorithm outcomes cognitively bias human decision-making
title_sort human-algorithm teaming in face recognition: how algorithm outcomes cognitively bias human decision-making
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7444527/
https://www.ncbi.nlm.nih.gov/pubmed/32822441
http://dx.doi.org/10.1371/journal.pone.0237855
work_keys_str_mv AT howardjohnj humanalgorithmteaminginfacerecognitionhowalgorithmoutcomescognitivelybiashumandecisionmaking
AT rabbittlaurar humanalgorithmteaminginfacerecognitionhowalgorithmoutcomescognitivelybiashumandecisionmaking
AT sirotinyevgeniyb humanalgorithmteaminginfacerecognitionhowalgorithmoutcomescognitivelybiashumandecisionmaking