Learning to see stuff
Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. But recent advances in unsupervised deep learning provide a framework for explaining how we learn to see them. We suggest that perception does not involve estimating physical quantities like reflectance or lighting. Instead, representations emerge from learning to encode and predict the visual input as efficiently and accurately as possible. Neural networks can be trained to compress natural images or to predict frames in movies without ‘ground truth’ data about the outside world. Yet, to succeed, such systems may automatically discover how to disentangle distal causal factors. Such ‘statistical appearance models’ potentially provide a coherent explanation of both failures and successes in perception.
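The abstract turns on one technical idea: a network trained only to reconstruct its own input can, through a narrow bottleneck, be forced to discover compact structure in images with no labels at all. Below is a minimal sketch of that idea as a convolutional autoencoder, assuming PyTorch; the architecture, image size, and hyperparameters are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of unsupervised image compression via a convolutional
# autoencoder. Illustrates the general idea in the abstract (learning to
# encode images with no labels); architecture and settings are assumptions.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, bottleneck: int = 32):
        super().__init__()
        # Encoder: compress a 64x64 RGB image into a small latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, bottleneck),
        )
        # Decoder: reconstruct the image from the latent code alone.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 32 * 16 * 16),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a real image loader: random "images" with values in [0, 1].
images = torch.rand(8, 3, 64, 64)

# The training signal is reconstruction error alone: no labels, and no
# "ground truth" about reflectance or lighting in the scenes behind the images.
for step in range(100):
    recon = model(images)
    loss = loss_fn(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The design point is the bottleneck: because the latent code is far smaller than the image, the encoder cannot memorize pixels and must instead capture regularities, which is the sense in which such ‘statistical appearance models’ can come to reflect distal causal factors without ever being told about them.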
Main Authors: | Fleming, Roland W; Storrs, Katherine R |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier B. V, 2019 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6919301/ https://www.ncbi.nlm.nih.gov/pubmed/31886321 http://dx.doi.org/10.1016/j.cobeha.2019.07.004 |
_version_ | 1783480740704419840 |
---|---|
author | Fleming, Roland W; Storrs, Katherine R
author_facet | Fleming, Roland W; Storrs, Katherine R
author_sort | Fleming, Roland W |
collection | PubMed |
description | Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. But recent advances in unsupervised deep learning provide a framework for explaining how we learn to see them. We suggest that perception does not involve estimating physical quantities like reflectance or lighting. Instead, representations emerge from learning to encode and predict the visual input as efficiently and accurately as possible. Neural networks can be trained to compress natural images or to predict frames in movies without ‘ground truth’ data about the outside world. Yet, to succeed, such systems may automatically discover how to disentangle distal causal factors. Such ‘statistical appearance models’ potentially provide a coherent explanation of both failures and successes in perception. |
format | Online Article Text |
id | pubmed-6919301 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Elsevier B. V |
record_format | MEDLINE/PubMed |
spelling | pubmed-6919301 2019-12-27 Learning to see stuff Fleming, Roland W Storrs, Katherine R Curr Opin Behav Sci Article Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. But recent advances in unsupervised deep learning provide a framework for explaining how we learn to see them. We suggest that perception does not involve estimating physical quantities like reflectance or lighting. Instead, representations emerge from learning to encode and predict the visual input as efficiently and accurately as possible. Neural networks can be trained to compress natural images or to predict frames in movies without ‘ground truth’ data about the outside world. Yet, to succeed, such systems may automatically discover how to disentangle distal causal factors. Such ‘statistical appearance models’ potentially provide a coherent explanation of both failures and successes in perception. Elsevier B. V 2019-12 /pmc/articles/PMC6919301/ /pubmed/31886321 http://dx.doi.org/10.1016/j.cobeha.2019.07.004 Text en © 2019 The Authors http://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle | Article Fleming, Roland W Storrs, Katherine R Learning to see stuff |
title | Learning to see stuff |
title_full | Learning to see stuff |
title_fullStr | Learning to see stuff |
title_full_unstemmed | Learning to see stuff |
title_short | Learning to see stuff |
title_sort | learning to see stuff |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6919301/ https://www.ncbi.nlm.nih.gov/pubmed/31886321 http://dx.doi.org/10.1016/j.cobeha.2019.07.004 |
work_keys_str_mv | AT flemingrolandw learningtoseestuff AT storrskatheriner learningtoseestuff |