A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning
Learning the relationship between the part and whole of an object, as humans do when recognizing objects, is a challenging task. In this paper, we specifically design a novel neural network to explore the local-to-global cognition of 3D models and the aggregation of structural contextual features in 3D...
Main authors: | Chen, Yu; Zhao, Jieyu; Qiu, Qilu
---|---|
Format: | Online Article Text
Language: | English
Published: | MDPI, 2022
Subjects: | Article
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9141038/ https://www.ncbi.nlm.nih.gov/pubmed/35626562 http://dx.doi.org/10.3390/e24050678
_version_ | 1784715246926036992 |
---|---|
author | Chen, Yu; Zhao, Jieyu; Qiu, Qilu
author_facet | Chen, Yu; Zhao, Jieyu; Qiu, Qilu
author_sort | Chen, Yu |
collection | PubMed |
description | Learning the relationship between the part and whole of an object, as humans do when recognizing objects, is a challenging task. In this paper, we specifically design a novel neural network to explore the local-to-global cognition of 3D models and the aggregation of structural contextual features in 3D space, inspired by the recent success of the Transformer in natural language processing (NLP) and its impressive strides in image analysis tasks such as image classification and object detection. We build a 3D shape Transformer based on local shape representation, which provides relation learning between local patches on 3D mesh models. Similar to token (word) states in NLP, we propose local shape tokens to encode local geometric information. On this basis, we design a shape-Transformer-based capsule routing algorithm. By applying an iterative capsule routing algorithm, local shape information can be further aggregated into high-level capsules containing deeper contextual information so as to realize cognition from the local to the whole. We performed classification tasks on the deformable 3D object data sets SHREC10 and SHREC15 and on the large data set ModelNet40, and obtained strong results, which show that our model has excellent performance in complex 3D model recognition and big-data feature learning. (An illustrative code sketch of this local-token-to-capsule pipeline appears after this record.) |
format | Online Article Text |
id | pubmed-9141038 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9141038 2022-05-28 A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning Chen, Yu Zhao, Jieyu Qiu, Qilu Entropy (Basel) Article Learning the relationship between the part and whole of an object, as humans do when recognizing objects, is a challenging task. In this paper, we specifically design a novel neural network to explore the local-to-global cognition of 3D models and the aggregation of structural contextual features in 3D space, inspired by the recent success of the Transformer in natural language processing (NLP) and its impressive strides in image analysis tasks such as image classification and object detection. We build a 3D shape Transformer based on local shape representation, which provides relation learning between local patches on 3D mesh models. Similar to token (word) states in NLP, we propose local shape tokens to encode local geometric information. On this basis, we design a shape-Transformer-based capsule routing algorithm. By applying an iterative capsule routing algorithm, local shape information can be further aggregated into high-level capsules containing deeper contextual information so as to realize cognition from the local to the whole. We performed classification tasks on the deformable 3D object data sets SHREC10 and SHREC15 and on the large data set ModelNet40, and obtained strong results, which show that our model has excellent performance in complex 3D model recognition and big-data feature learning. MDPI 2022-05-11 /pmc/articles/PMC9141038/ /pubmed/35626562 http://dx.doi.org/10.3390/e24050678 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Chen, Yu Zhao, Jieyu Qiu, Qilu A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning |
title | A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning |
title_full | A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning |
title_fullStr | A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning |
title_full_unstemmed | A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning |
title_short | A Transformer-Based Capsule Network for 3D Part–Whole Relationship Learning |
title_sort | transformer-based capsule network for 3d part–whole relationship learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9141038/ https://www.ncbi.nlm.nih.gov/pubmed/35626562 http://dx.doi.org/10.3390/e24050678 |
work_keys_str_mv | AT chenyu atransformerbasedcapsulenetworkfor3dpartwholerelationshiplearning AT zhaojieyu atransformerbasedcapsulenetworkfor3dpartwholerelationshiplearning AT qiuqilu atransformerbasedcapsulenetworkfor3dpartwholerelationshiplearning AT chenyu transformerbasedcapsulenetworkfor3dpartwholerelationshiplearning AT zhaojieyu transformerbasedcapsulenetworkfor3dpartwholerelationshiplearning AT qiuqilu transformerbasedcapsulenetworkfor3dpartwholerelationshiplearning |
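As referenced in the description above, the sketch below illustrates the kind of pipeline the abstract describes: local shape tokens are contextualised by a Transformer encoder and then aggregated into class-level capsules by iterative routing-by-agreement. This is not the authors' shape-Transformer routing algorithm; the layer sizes, the `squash` nonlinearity, and the routing update are standard dynamic-routing choices (Sabour et al., 2017) used here purely for illustration, and the random "tokens" stand in for real local mesh-patch features.

```python
# Illustrative sketch only: a generic "local tokens -> Transformer -> capsule
# routing" pipeline in PyTorch. Shapes, layer sizes, and the routing update
# are standard dynamic-routing choices, not the paper's actual algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    """Squash nonlinearity: keeps capsule direction, maps its length into [0, 1)."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)


class TokenTransformerCapsNet(nn.Module):
    def __init__(self, token_dim=64, n_classes=10, caps_dim=16, n_iters=3):
        super().__init__()
        # Relation learning between local shape tokens (stand-in for the
        # paper's 3D shape Transformer over local mesh patches).
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # One pose-transformation matrix per output (class) capsule.
        self.W = nn.Parameter(0.01 * torch.randn(n_classes, token_dim, caps_dim))
        self.n_iters = n_iters

    def forward(self, tokens):                        # tokens: (B, N, token_dim)
        h = self.encoder(tokens)                      # contextualised local tokens
        # Votes each local token casts for every class capsule.
        u_hat = torch.einsum("bnd,cde->bnce", h, self.W)         # (B, N, C, caps_dim)
        b = torch.zeros(u_hat.shape[:3], device=tokens.device)   # routing logits
        for _ in range(self.n_iters):                 # iterative routing-by-agreement
            c = F.softmax(b, dim=2)                   # coupling coefficients over classes
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))     # (B, C, caps_dim)
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
        return v.norm(dim=-1)                         # capsule length ~ class score


if __name__ == "__main__":
    model = TokenTransformerCapsNet()
    fake_tokens = torch.randn(2, 32, 64)   # 2 meshes, 32 local shape tokens each
    print(model(fake_tokens).shape)        # torch.Size([2, 10])
```

Running the example prints a `(2, 10)` tensor of capsule lengths, one class score per model; in the paper's setting the token features would come from local geometric descriptors of mesh patches rather than random noise.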