Editorial: Cross-Modal Learning: Adaptivity, Prediction and Interaction
| Main Authors: | Zhang, Jianwei; Wermter, Stefan; Sun, Fuchun; Zhang, Changshui; Engel, Andreas K.; Röder, Brigitte; Fu, Xiaolan; Xue, Gui |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2022 |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9016844/ https://www.ncbi.nlm.nih.gov/pubmed/35449798 http://dx.doi.org/10.3389/fnbot.2022.889911 |
Similar Items
- Attention Based Visual Analysis for Fast Grasp Planning With a Multi-Fingered Robotic Hand
  by: Deng, Zhen, et al.
  Published: (2019)
- Learning Then, Learning Now, and Every Second in Between: Lifelong Learning With a Simulated Humanoid Robot
  by: Logacjov, Aleksej, et al.
  Published: (2021)
- Teaching NICO How to Grasp: An Empirical Study on Crossmodal Social Interaction as a Key Factor for Robots Learning From Humans
  by: Kerzel, Matthias, et al.
  Published: (2020)
- Learning indoor robot navigation using visual and sensorimotor map information
  by: Yan, Wenjie, et al.
  Published: (2013)
- Improving Robot Motor Learning with Negatively Valenced Reinforcement Signals
  by: Navarro-Guerrero, Nicolás, et al.
  Published: (2017)