The Multiscale Surface Vision Transformer

Bibliographic Details
Main Authors: Dahan, Simon, Fawaz, Abdulah, Suliman, Mohamed A., da Silva, Mariana, Williams, Logan Z. J., Rueckert, Daniel, Robinson, Emma C.
Format: Online Article Text
Language: English
Published: Cornell University 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055498/
https://www.ncbi.nlm.nih.gov/pubmed/36994163
author Dahan, Simon
Fawaz, Abdulah
Suliman, Mohamed A.
da Silva, Mariana
Williams, Logan Z. J.
Rueckert, Daniel
Robinson, Emma C.
collection PubMed
description Surface meshes are a favoured domain for representing structural and functional information on the human cortex, but their complex topology and geometry pose significant challenges for deep learning analysis. While Transformers have excelled as domain-agnostic architectures for sequence-to-sequence learning, notably for structures where the translation of the convolution operation is non-trivial, the quadratic cost of the self-attention operation remains an obstacle for many dense prediction tasks. Inspired by some of the latest advances in hierarchical modelling with vision transformers, we introduce the Multiscale Surface Vision Transformer (MS-SiT) as a backbone architecture for surface deep learning. The self-attention mechanism is applied within local-mesh-windows to allow for high-resolution sampling of the underlying data, while a shifted-window strategy improves the sharing of information between windows. Neighbouring patches are successively merged, allowing the MS-SiT to learn hierarchical representations suitable for any prediction task. Results demonstrate that the MS-SiT outperforms existing surface deep learning methods for neonatal phenotyping prediction tasks using the Developing Human Connectome Project (dHCP) dataset. Furthermore, building the MS-SiT backbone into a U-shaped architecture for surface segmentation demonstrates competitive results on cortical parcellation using the UK Biobank (UKB) and manually-annotated MindBoggle datasets. Code and trained models are publicly available at https://github.com/metrics-lab/surface-vision-transformers.
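As a rough illustration of the two mechanisms the description names, self-attention within local mesh windows and the merging of neighbouring patches into a hierarchy, the following minimal PyTorch sketch may help. It makes simplifying assumptions: a contiguous 1-D partition of the patch sequence stands in for true icosphere window neighbourhoods, the shifted-window step is omitted, and all names and sizes here (WindowAttention, PatchMerging, 96 channels, 64-patch windows) are hypothetical rather than taken from the authors' code; see the linked repository for the actual implementation.

    import torch
    import torch.nn as nn

    class WindowAttention(nn.Module):
        # Multi-head self-attention applied independently inside each local window,
        # so cost grows linearly with the number of patches rather than quadratically.
        def __init__(self, dim, num_heads, window_size):
            super().__init__()
            self.window_size = window_size
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, x):
            # x: (batch, num_patches, dim); num_patches assumed divisible by window_size.
            b, n, d = x.shape
            w = self.window_size
            x = x.reshape(b * (n // w), w, d)          # partition into non-overlapping windows
            x, _ = self.attn(x, x, x, need_weights=False)
            return x.reshape(b, n, d)                  # undo the window partition

    class PatchMerging(nn.Module):
        # Concatenate each group of neighbouring patches and project, coarsening the
        # resolution while widening the channels: the hierarchical step.
        def __init__(self, dim, group=4):
            super().__init__()
            self.group = group
            self.reduce = nn.Linear(group * dim, 2 * dim)

        def forward(self, x):
            b, n, d = x.shape
            x = x.reshape(b, n // self.group, self.group * d)
            return self.reduce(x)

    if __name__ == "__main__":
        x = torch.randn(2, 1280, 96)          # 2 meshes, 1280 patches, 96 channels (sizes illustrative)
        x = WindowAttention(96, 4, 64)(x)     # local attention within 64-patch windows
        x = PatchMerging(96)(x)               # 320 coarser patches, 192 channels
        print(x.shape)                        # torch.Size([2, 320, 192])

Stacking several such attention-plus-merging stages yields the multiscale feature pyramid the abstract describes; in a U-shaped segmentation variant, the coarse features would then be progressively upsampled back to full mesh resolution.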
format Online
Article
Text
id pubmed-10055498
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Cornell University
record_format MEDLINE/PubMed
spelling pubmed-10055498 2023-03-30 The Multiscale Surface Vision Transformer Dahan, Simon; Fawaz, Abdulah; Suliman, Mohamed A.; da Silva, Mariana; Williams, Logan Z. J.; Rueckert, Daniel; Robinson, Emma C. ArXiv Article Cornell University 2023-03-21 /pmc/articles/PMC10055498/ /pubmed/36994163 Text en This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.
title The Multiscale Surface Vision Transformer
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055498/
https://www.ncbi.nlm.nih.gov/pubmed/36994163