
Unsupervised learning reveals interpretable latent representations for translucency perception


Bibliographic Details
Main Authors: Liao, Chenxi; Sawayama, Masataka; Xiao, Bei
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9942964/
https://www.ncbi.nlm.nih.gov/pubmed/36753520
http://dx.doi.org/10.1371/journal.pcbi.1010878
collection PubMed
description Humans constantly assess the appearance of materials to plan actions, such as stepping on icy roads without slipping. Visual inference of materials is important but challenging because a given material can appear dramatically different in various scenes. This problem especially stands out for translucent materials, whose appearance strongly depends on lighting, geometry, and viewpoint. Despite this, humans can still distinguish between different materials, and it remains unsolved how to systematically discover visual features pertinent to material inference from natural images. Here, we develop an unsupervised style-based image generation model to identify perceptually relevant dimensions for translucent material appearances from photographs. We find our model, with its layer-wise latent representation, can synthesize images of diverse and realistic materials. Importantly, without supervision, human-understandable scene attributes, including the object’s shape, material, and body color, spontaneously emerge in the model’s layer-wise latent space in a scale-specific manner. By embedding an image into the learned latent space, we can manipulate specific layers’ latent code to modify the appearance of the object in the image. Specifically, we find that manipulation on the early layers (coarse spatial scale) transforms the object’s shape, while manipulation on the later layers (fine spatial scale) modifies its body color. The middle layers of the latent space selectively encode translucency features, and manipulation of such layers coherently modifies the translucency appearance without changing the object’s shape or body color. Moreover, we find the middle layers of the latent space can successfully predict human translucency ratings, suggesting that translucent impressions are established in mid-to-low spatial scale features. This layer-wise latent representation allows us to systematically discover perceptually relevant image features for human translucency perception. Together, our findings reveal that learning the scale-specific statistical structure of natural images might be crucial for humans to efficiently represent material properties across contexts.
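The layer-wise manipulation the abstract describes — keeping some layers of an embedded image's latent code and swapping others from a reference — can be sketched as follows. This is a minimal NumPy illustration of style mixing in a StyleGAN-like layered latent space, not the authors' code; the number of layers, the latent dimensionality, and the split of layer indices into shape/translucency/color bands are all hypothetical.

```python
import numpy as np

def mix_layerwise(w_source, w_reference, layers):
    """Return a copy of w_source whose latent codes at the given
    layer indices are replaced by those from w_reference."""
    w_mixed = w_source.copy()
    w_mixed[layers] = w_reference[layers]
    return w_mixed

# Toy layered latent codes: 16 layers x 512 dims (hypothetical sizes).
rng = np.random.default_rng(0)
w_src = rng.normal(size=(16, 512))  # embedded image to edit
w_ref = rng.normal(size=(16, 512))  # reference providing the new style

# Illustrative split: early layers ~ coarse scale (shape), middle layers
# ~ translucency features, late layers ~ fine scale (body color).
middle = list(range(5, 10))
w_new = mix_layerwise(w_src, w_ref, middle)
```

Feeding `w_new` through the generator would, per the paper's finding, change the translucency appearance while the early (shape) and late (color) layers of `w_src` keep the object's shape and body color fixed.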
id pubmed-9942964
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-9942964 2023-02-22 Unsupervised learning reveals interpretable latent representations for translucency perception Liao, Chenxi; Sawayama, Masataka; Xiao, Bei. PLoS Comput Biol, Research Article. Public Library of Science 2023-02-08 /pmc/articles/PMC9942964/ /pubmed/36753520 http://dx.doi.org/10.1371/journal.pcbi.1010878 Text en © 2023 Liao et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
topic Research Article