
When will AI misclassify? Intuiting failures on natural images

Machine recognition systems now rival humans in their ability to classify natural images. However, their success is accompanied by a striking failure: a tendency to commit bizarre misclassifications on inputs specifically selected to fool them. What do ordinary people know about the nature and prevalence of such classification errors? Here, five experiments exploit the recent discovery of “natural adversarial examples” to ask whether naive observers can predict when and how machines will misclassify natural images. Whereas classical adversarial examples are inputs that have been minimally perturbed to induce misclassifications, natural adversarial examples are simply unmodified natural photographs that consistently fool a wide variety of machine recognition systems. For example, a bird casting a shadow might be misclassified as a sundial, or a beach umbrella made of straw might be misclassified as a broom. In Experiment 1, subjects accurately predicted which natural images machines would misclassify and which they would not. Experiments 2 through 4 extended this ability to how the images would be misclassified, showing that anticipating machine misclassifications goes beyond merely identifying an image as nonprototypical. Finally, Experiment 5 replicated these findings under more ecologically valid conditions, demonstrating that subjects can anticipate misclassifications not only under two-alternative forced-choice conditions (as in Experiments 1–4), but also when the images appear one at a time in a continuous stream—a skill that may be of value to human–machine teams. We suggest that ordinary people can intuit how easy or hard a natural image is to classify, and we discuss the implications of these results for practical and theoretical issues at the interface of biological and artificial vision.

Bibliographic Details
Main Authors: Nartker, Makaela; Zhou, Zhenglong; Firestone, Chaz
Format: Online Article Text
Language: English
Journal: J Vis (Journal of Vision)
Published: The Association for Research in Vision and Ophthalmology, 2023-04-06
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10082388/
https://www.ncbi.nlm.nih.gov/pubmed/37022698
http://dx.doi.org/10.1167/jov.23.4.4
License: Copyright 2023 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/).
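
The abstract contrasts classical adversarial examples, which are minimally perturbed to fool a classifier, with natural adversarial examples, which are unmodified photographs that fool classifiers anyway. The sketch below is not from the article; it only illustrates how such a misclassification could be observed by running a pretrained ImageNet classifier on an ordinary photograph. The model choice, image filename, and software stack are illustrative assumptions, not the authors' materials.

from PIL import Image
import torch
from torchvision import models

# Pretrained ResNet-50 stands in for "a machine recognition system";
# any ImageNet classifier would serve for this illustration.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Hypothetical filename: an unmodified natural photograph, e.g., a bird
# casting a shadow, as in the abstract's example.
image = Image.open("bird_with_shadow.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs[0].max(dim=0)
label = weights.meta["categories"][int(top_idx)]

# On a natural adversarial example, this top-1 label can be confidently
# wrong (e.g., "sundial" rather than a bird class) even though the photo
# was never perturbed.
print(f"Top-1 prediction: {label} ({top_prob.item():.1%})")

The experiments reported in the article asked human observers, rather than another machine, to anticipate when and how this kind of top-1 prediction would go wrong.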