Human-like scene interpretation by a guided counterstream processing
In modeling vision, there has been remarkable progress in recognizing a range of scene components, but the problem of analyzing full scenes, an ultimate goal of visual perception, is still largely open. To deal with complete scenes, recent work has focused on training models for extracting the...
Main Authors: | Ullman, Shimon; Assif, Liav; Strugatski, Alona; Vatashsky, Ben-Zion; Levi, Hila; Netanyahu, Aviv; Yaari, Adam
---|---
Format: | Online Article Text
Language: | English
Published: | National Academy of Sciences, 2023
Subjects: | Biological Sciences
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556630/ https://www.ncbi.nlm.nih.gov/pubmed/37769256 http://dx.doi.org/10.1073/pnas.2211179120
_version_ | 1785116909236125696 |
---|---|
author | Ullman, Shimon Assif, Liav Strugatski, Alona Vatashsky, Ben-Zion Levi, Hila Netanyahu, Aviv Yaari, Adam |
author_facet | Ullman, Shimon Assif, Liav Strugatski, Alona Vatashsky, Ben-Zion Levi, Hila Netanyahu, Aviv Yaari, Adam |
author_sort | Ullman, Shimon |
collection | PubMed |
description | In modeling vision, there has been remarkable progress in recognizing a range of scene components, but the problem of analyzing full scenes, an ultimate goal of visual perception, is still largely open. To deal with complete scenes, recent work has focused on training models for extracting the full graph-like structure of a scene. In contrast with scene graphs, humans’ scene perception focuses on selected structures in the scene, starting with a limited interpretation and evolving sequentially in a goal-directed manner [G. L. Malcolm, I. I. A. Groen, C. I. Baker, Trends Cogn. Sci. 20, 843–856 (2016)]. Guidance is crucial throughout scene interpretation since the extraction of a full scene representation is often infeasible. Here, we present a model that performs human-like guided scene interpretation, using iterative bottom–up, top–down processing in a “counterstream” structure motivated by cortical circuitry. The process proceeds by the sequential application of top–down instructions that guide the interpretation. The results show how scene structures of interest to the viewer are extracted by an automatically selected sequence of top–down instructions. The model shows two further benefits. One is an inherent capability to deal well with combinatorial generalization (generalizing broadly to unseen scene configurations), which is limited in current network models [B. Lake, M. Baroni, 35th International Conference on Machine Learning, ICML 2018 (2018)]. The second is the ability to combine visual with nonvisual information at each cycle of the interpretation process, a key aspect of modeling human perception as well as of advancing AI vision systems. |
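The description above outlines an iterative, instruction-guided loop: a top–down stream selects the next instruction from the current partial interpretation (and possibly nonvisual information such as a query or prior knowledge), and a bottom–up stream applies that instruction to the image. The sketch below is only a minimal illustration of that loop as summarized in the abstract, not the authors’ published implementation; all names (`interpret_scene`, `Interpretation`, the `bottom_up` and `top_down` callables) are hypothetical placeholders.

```python
# Illustrative sketch of an instruction-guided bottom-up / top-down
# ("counterstream") interpretation loop, as described in the abstract.
# All names here are hypothetical, not the authors' implementation.

from dataclasses import dataclass, field


@dataclass
class Interpretation:
    """Partial scene structure accumulated across interpretation cycles."""
    objects: list = field(default_factory=list)
    relations: list = field(default_factory=list)


def interpret_scene(image, bottom_up, top_down, goal, max_cycles=10):
    """Run guided interpretation cycles until the goal structure is extracted.

    top_down(interpretation, goal) -> next instruction, or None when done;
        may also incorporate nonvisual information (e.g., a query).
    bottom_up(image, instruction)  -> (new_objects, new_relations) found in the
        image under the guidance of the instruction.
    """
    interpretation = Interpretation()
    for _ in range(max_cycles):
        # Top-down stream: pick the next instruction from the current
        # partial interpretation and the viewer's goal.
        instruction = top_down(interpretation, goal)
        if instruction is None:  # goal structure fully extracted
            break
        # Bottom-up stream: apply the instruction to the image and add the
        # newly detected objects and relations to the partial interpretation.
        new_objects, new_relations = bottom_up(image, instruction)
        interpretation.objects.extend(new_objects)
        interpretation.relations.extend(new_relations)
    return interpretation
```

On this reading, the sequential, goal-directed character described in the abstract comes from the loop itself: only the structures requested by the selected instructions are extracted, rather than a full scene graph in one pass.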
format | Online Article Text |
id | pubmed-10556630 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | National Academy of Sciences |
record_format | MEDLINE/PubMed |
spelling | pubmed-105566302023-10-07 Human-like scene interpretation by a guided counterstream processing Ullman, Shimon Assif, Liav Strugatski, Alona Vatashsky, Ben-Zion Levi, Hila Netanyahu, Aviv Yaari, Adam Proc Natl Acad Sci U S A Biological Sciences In modeling vision, there has been a remarkable progress in recognizing a range of scene components, but the problem of analyzing full scenes, an ultimate goal of visual perception, is still largely open. To deal with complete scenes, recent work focused on the training of models for extracting the full graph-like structure of a scene. In contrast with scene graphs, humans’ scene perception focuses on selected structures in the scene, starting with a limited interpretation and evolving sequentially in a goal-directed manner [G. L. Malcolm, I. I. A. Groen, C. I. Baker, Trends. Cogn. Sci. 20, 843–856 (2016)]. Guidance is crucial throughout scene interpretation since the extraction of full scene representation is often infeasible. Here, we present a model that performs human-like guided scene interpretation, using an iterative bottom–up, top–down processing, in a “counterstream” structure motivated by cortical circuitry. The process proceeds by the sequential application of top–down instructions that guide the interpretation process. The results show how scene structures of interest to the viewer are extracted by an automatically selected sequence of top–down instructions. The model shows two further benefits. One is an inherent capability to deal well with the problem of combinatorial generalization—generalizing broadly to unseen scene configurations, which is limited in current network models [B. Lake, M. Baroni, 35th International Conference on Machine Learning, ICML 2018 (2018)]. The second is the ability to combine visual with nonvisual information at each cycle of the interpretation process, which is a key aspect for modeling human perception as well as advancing AI vision systems. National Academy of Sciences 2023-09-28 2023-10-03 /pmc/articles/PMC10556630/ /pubmed/37769256 http://dx.doi.org/10.1073/pnas.2211179120 Text en Copyright © 2023 the Author(s). Published by PNAS. https://creativecommons.org/licenses/by/4.0/This open access article is distributed under Creative Commons Attribution License 4.0 (CC BY) (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Biological Sciences Ullman, Shimon Assif, Liav Strugatski, Alona Vatashsky, Ben-Zion Levi, Hila Netanyahu, Aviv Yaari, Adam Human-like scene interpretation by a guided counterstream processing |
title | Human-like scene interpretation by a guided counterstream processing |
title_full | Human-like scene interpretation by a guided counterstream processing |
title_fullStr | Human-like scene interpretation by a guided counterstream processing |
title_full_unstemmed | Human-like scene interpretation by a guided counterstream processing |
title_short | Human-like scene interpretation by a guided counterstream processing |
title_sort | human-like scene interpretation by a guided counterstream processing |
topic | Biological Sciences |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556630/ https://www.ncbi.nlm.nih.gov/pubmed/37769256 http://dx.doi.org/10.1073/pnas.2211179120 |
work_keys_str_mv | AT ullmanshimon humanlikesceneinterpretationbyaguidedcounterstreamprocessing AT assifliav humanlikesceneinterpretationbyaguidedcounterstreamprocessing AT strugatskialona humanlikesceneinterpretationbyaguidedcounterstreamprocessing AT vatashskybenzion humanlikesceneinterpretationbyaguidedcounterstreamprocessing AT levihila humanlikesceneinterpretationbyaguidedcounterstreamprocessing AT netanyahuaviv humanlikesceneinterpretationbyaguidedcounterstreamprocessing AT yaariadam humanlikesceneinterpretationbyaguidedcounterstreamprocessing |