
Interpreting vision and language generative models with semantic visual priors

When applied to image-to-text models, explainability methods face two challenges. First, they often provide token-by-token explanations; that is, they compute a visual explanation for each token of the generated sequence. This makes explanations expensive to compute and unable to comprehensively explain the model's output. Second, for models with visual inputs, explainability methods such as SHAP typically treat superpixels as features. Since superpixels do not correspond to semantically meaningful regions of an image, this makes explanations harder to interpret. We develop a framework based on SHAP that generates comprehensive, meaningful explanations by leveraging the meaning representation of the output sequence as a whole. Moreover, by exploiting semantic priors in the visual backbone, we extract an arbitrary number of features, which allows the efficient computation of Shapley values on large-scale models while generating highly meaningful visual explanations. We demonstrate that our method generates semantically more expressive explanations than traditional methods at a lower compute cost, and that it generalizes to a large family of vision-language models.
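To make the idea concrete, here is a minimal Python sketch of the approach the abstract describes: Shapley values computed over a handful of semantically meaningful image regions rather than superpixels, with a value function that scores the generated sentence as a whole via an embedding similarity instead of one explanation per token. This is not the authors' code; generate_caption, embed_sentence, and the region masks are hypothetical stand-ins for a vision-language model, a sentence encoder, and the visual backbone's semantic segmentation.

    import itertools
    import math
    import numpy as np

    def shapley_over_segments(image, masks, generate_caption, embed_sentence):
        """Exact Shapley values for a small set of semantic regions.

        masks: list of HxW boolean arrays, one per semantic region (e.g.
               derived from the visual backbone's segmentation priors).
               Exact enumeration is feasible because the semantic priors
               keep the number of features small.
        """
        n = len(masks)
        full_caption = generate_caption(image)      # sequence explained as a whole
        target = embed_sentence(full_caption)       # meaning representation

        def value(coalition):
            # Keep only the regions in the coalition; grey out the rest.
            visible = np.zeros(image.shape[:2], dtype=bool)
            for i in coalition:
                visible |= masks[i]
            masked = image.copy()
            masked[~visible] = 128                  # neutral baseline fill
            cand = embed_sentence(generate_caption(masked))
            # Cosine similarity to the full caption's embedding: one scalar
            # for the whole output sequence, not one score per token.
            return float(np.dot(cand, target) /
                         (np.linalg.norm(cand) * np.linalg.norm(target) + 1e-9))

        phi = np.zeros(n)
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for r in range(n):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                for S in itertools.combinations(others, r):
                    phi[i] += weight * (value(S + (i,)) - value(S))
        return phi  # one importance score per semantic region

Because the semantic priors keep the feature count small, exact enumeration of coalitions stays tractable; with superpixel features, a sampling approximation would typically be needed instead.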


Bibliographic Details
Main Authors: Cafagna, Michele; Rojas-Barahona, Lina M.; van Deemter, Kees; Gatt, Albert
Format: Online Article (Text)
Language: English
Journal: Frontiers in Artificial Intelligence (Front Artif Intell)
Published: Frontiers Media S.A., 2023-09-25
Subjects: Artificial Intelligence
Collection: PubMed (pubmed-10561255)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10561255/
https://www.ncbi.nlm.nih.gov/pubmed/37818428
http://dx.doi.org/10.3389/frai.2023.1220476
Copyright © 2023 Cafagna, Rojas-Barahona, van Deemter and Gatt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.