Learning to perform role-filler binding with schematic knowledge
Main authors:
Format: Online article (full text)
Language: English
Published: PeerJ Inc., 2021
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8019313/ https://www.ncbi.nlm.nih.gov/pubmed/33850650 http://dx.doi.org/10.7717/peerj.11046
Summary: Through specific experiences, humans learn the relationships that underlie the structure of events in the world. Schema theory suggests that we organize this information in mental frameworks called “schemata,” which represent our knowledge of the structure of the world. Generalizing knowledge of structural relationships to new situations requires role-filler binding, the ability to associate specific “fillers” with abstract “roles.” For instance, when we hear the sentence *Alice ordered a tea from Bob*, the role-filler bindings customer:Alice, drink:tea and barista:Bob allow us to understand and make inferences about the sentence. We can perform these bindings for arbitrary fillers—we understand this sentence even if we have never heard the names Alice, tea, or Bob before. In this work, we define a model as capable of performing role-filler binding if it can recall arbitrary fillers corresponding to a specified role, even when these pairings violate correlations seen during training. Previous work found that models can learn this ability when explicitly told what the roles and fillers are, or when given fillers seen during training. We show that networks with external memory learn to bind roles to arbitrary fillers, without explicitly labeled role-filler pairs. We further show that they can perform these bindings on role-filler pairs that violate correlations seen during training, while retaining knowledge of training correlations. We apply analyses inspired by neural decoding to interpret what the networks have learned.
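To make the evaluation criterion in the summary concrete, here is a minimal sketch, not the authors' code: stories instantiate the schema from the abstract's example, and a model is queried with a role and must recall the bound filler. `model.recall` is a hypothetical interface standing in for querying a trained memory-augmented network, and the novel filler names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of the role-filler binding test
# described above: a model reads a schematic story, is queried with a role,
# and must recall the bound filler -- even when the filler never appeared
# in training.
import random

ROLES = ["customer", "drink", "barista"]
TRAIN_FILLERS = {          # fillers seen (with their correlations) in training
    "customer": ["Alice", "Carol"],
    "drink": ["tea", "coffee"],
    "barista": ["Bob", "Dave"],
}
NOVEL_FILLERS = {          # arbitrary fillers never seen in training (illustrative)
    "customer": ["Xia"],
    "drink": ["matcha"],
    "barista": ["Yusuf"],
}

def make_story(filler_pools):
    """Instantiate the schema 'customer ordered a drink from barista'."""
    binding = {role: random.choice(pool) for role, pool in filler_pools.items()}
    story = (f"{binding['customer']} ordered a {binding['drink']} "
             f"from {binding['barista']}")
    return story, binding

def binding_accuracy(model, filler_pools, n_trials=100):
    """Fraction of role queries for which the model recalls the correct filler."""
    correct = 0
    for _ in range(n_trials):
        story, binding = make_story(filler_pools)
        role = random.choice(ROLES)
        # `model.recall(story, role)` is a hypothetical query interface for a
        # trained network with external memory.
        if model.recall(story, role) == binding[role]:
            correct += 1
    return correct / n_trials
```

Under this criterion, a model counts as performing role-filler binding only if `binding_accuracy` stays high on NOVEL_FILLERS, not just on the fillers and pairings seen during training.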