Making Expert Decisions Easier to Fathom: On the Explainability of Visual Object Recognition Expertise

In everyday life, we rely on human experts to make a variety of complex decisions, such as medical diagnoses. These decisions are typically made through some form of weakly guided learning, a form of learning in which decision expertise is gained through labeled examples rather than explicit instructions. Expert decisions can significantly affect people other than the decision-maker (for example, teammates, clients, or patients), but may seem cryptic and mysterious to them. It is therefore desirable for the decision-maker to explain the rationale behind these decisions to others. This, however, can be difficult to do. Often, the expert has a “gut feeling” for what the correct decision is, but may have difficulty giving an objective set of criteria for arriving at it. The explainability of human expert decisions, i.e., the extent to which experts can make their decisions understandable to others, has not been studied systematically. Here, we characterize the explainability of human decision-making, using binary categorical decisions about visual objects as an illustrative example. We trained a group of “expert” subjects to categorize novel, naturalistic 3-D objects called “digital embryos” into one of two hitherto unknown categories, using a weakly guided learning paradigm. We then asked the expert subjects to provide a written explanation for each binary decision they made. These experiments generated several intriguing findings. First, the experts’ explanations modestly improved the categorization performance of naïve users (paired t-tests, p < 0.05). Second, this improvement differed significantly between explanations. In particular, explanations that pointed to a spatially localized region of the object improved users’ performance far more than explanations that referred to global features. Third, neither experts nor naïve subjects were able to reliably predict the degree of improvement for a given explanation. Finally, significant bias effects were observed: naïve subjects rated an explanation significantly higher when told it came from an expert than when told the same explanation came from another non-expert, suggesting a variant of the Asch conformity effect. Together, our results characterize, for the first time, the various methodological and conceptual issues underlying the explainability of human decisions.


Bibliographic Details
Main Authors: Hegdé, Jay; Bart, Evgeniy
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6194166/
https://www.ncbi.nlm.nih.gov/pubmed/30369862
http://dx.doi.org/10.3389/fnins.2018.00670
Record ID: pubmed-6194166
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Neurosci (Neuroscience)
Published Online: 2018-10-12
Copyright © 2018 Hegdé and Bart. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY; http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Topic: Neuroscience