
Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder


Bibliographic Details
Main Authors: Cheng, Ao, Shi, Jiahao, Wang, Lirong, Zhang, Ruobing
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10285402/
https://www.ncbi.nlm.nih.gov/pubmed/37360945
http://dx.doi.org/10.3389/fninf.2023.1118419
_version_ 1785061603101638656
author Cheng, Ao
Shi, Jiahao
Wang, Lirong
Zhang, Ruobing
author_facet Cheng, Ao
Shi, Jiahao
Wang, Lirong
Zhang, Ruobing
author_sort Cheng, Ao
collection PubMed
description INTRODUCTION: The exorbitant cost of accurately annotating large-scale serial scanning electron microscope (SEM) images as ground truth for training has always been a great challenge for brain map reconstruction by deep learning methods in neural connectome studies. The representation ability of the model is strongly correlated with the number of such high-quality labels. Recently, the masked autoencoder (MAE) has been shown to effectively pre-train Vision Transformers (ViT) and improve their representational capabilities. METHODS: In this paper, we investigated a self-pre-training paradigm for serial SEM images with MAE to support downstream segmentation tasks. We randomly masked voxels in three-dimensional brain image patches and trained an autoencoder to reconstruct the neuronal structures. RESULTS AND DISCUSSION: We tested different pre-training and fine-tuning configurations on three serial SEM datasets of mouse brains, including two public ones, SNEMI3D and MitoEM-R, and one acquired in our lab. A series of masking ratios was examined, and the optimal ratio for pre-training efficiency in 3D segmentation was identified. The MAE pre-training strategy significantly outperformed supervised learning from scratch. Our work shows that the general MAE framework can be a unified approach for effectively learning the representation of heterogeneous neural structural features in serial SEM images, greatly facilitating brain connectome reconstruction.
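The METHODS field above describes MAE-style self-pre-training: voxels in 3D image patches are randomly masked and an autoencoder is trained to reconstruct them. As an illustration only, not the authors' code, here is a minimal numpy sketch of the token-wise random masking step; the function name `mask_patch`, the cubic token size, the zero fill value, and all parameters are assumptions for the sketch.

```python
import numpy as np

def mask_patch(patch, mask_ratio=0.75, token=(4, 4, 4), rng=None):
    """Randomly mask cubic sub-blocks ("tokens") of a 3D image patch,
    MAE-style. Returns the masked patch and the boolean voxel mask
    (True = masked). Patch dimensions must be divisible by `token`."""
    rng = np.random.default_rng(rng)
    d, h, w = patch.shape
    td, th, tw = token
    nd, nh, nw = d // td, h // th, w // tw
    n_tokens = nd * nh * nw
    n_masked = int(round(mask_ratio * n_tokens))

    # Pick which tokens to hide, without replacement.
    flat = np.zeros(n_tokens, dtype=bool)
    flat[rng.choice(n_tokens, n_masked, replace=False)] = True

    # Upsample the token-level mask to voxel resolution.
    grid = flat.reshape(nd, nh, nw)
    mask = grid.repeat(td, axis=0).repeat(th, axis=1).repeat(tw, axis=2)

    masked = patch.copy()
    masked[mask] = 0  # zero fill; real pipelines may use a learned mask token
    return masked, mask
```

The encoder would then see only the visible voxels, and the decoder would be trained to reconstruct the masked ones; sweeping `mask_ratio`, as the abstract describes, trades reconstruction difficulty against the amount of visible context.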
format Online
Article
Text
id pubmed-10285402
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10285402 2023-06-23 Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder Cheng, Ao Shi, Jiahao Wang, Lirong Zhang, Ruobing Front Neuroinform Neuroscience INTRODUCTION: The exorbitant cost of accurately annotating large-scale serial scanning electron microscope (SEM) images as ground truth for training has always been a great challenge for brain map reconstruction by deep learning methods in neural connectome studies. The representation ability of the model is strongly correlated with the number of such high-quality labels. Recently, the masked autoencoder (MAE) has been shown to effectively pre-train Vision Transformers (ViT) and improve their representational capabilities. METHODS: In this paper, we investigated a self-pre-training paradigm for serial SEM images with MAE to support downstream segmentation tasks. We randomly masked voxels in three-dimensional brain image patches and trained an autoencoder to reconstruct the neuronal structures. RESULTS AND DISCUSSION: We tested different pre-training and fine-tuning configurations on three serial SEM datasets of mouse brains, including two public ones, SNEMI3D and MitoEM-R, and one acquired in our lab. A series of masking ratios was examined, and the optimal ratio for pre-training efficiency in 3D segmentation was identified. The MAE pre-training strategy significantly outperformed supervised learning from scratch. Our work shows that the general MAE framework can be a unified approach for effectively learning the representation of heterogeneous neural structural features in serial SEM images, greatly facilitating brain connectome reconstruction. Frontiers Media S.A. 2023-06-08 /pmc/articles/PMC10285402/ /pubmed/37360945 http://dx.doi.org/10.3389/fninf.2023.1118419 Text en Copyright © 2023 Cheng, Shi, Wang and Zhang. 
https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Cheng, Ao
Shi, Jiahao
Wang, Lirong
Zhang, Ruobing
Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder
title Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder
title_full Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder
title_fullStr Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder
title_full_unstemmed Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder
title_short Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder
title_sort learning the heterogeneous representation of brain's structure from serial sem images using a masked autoencoder
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10285402/
https://www.ncbi.nlm.nih.gov/pubmed/37360945
http://dx.doi.org/10.3389/fninf.2023.1118419
work_keys_str_mv AT chengao learningtheheterogeneousrepresentationofbrainsstructurefromserialsemimagesusingamaskedautoencoder
AT shijiahao learningtheheterogeneousrepresentationofbrainsstructurefromserialsemimagesusingamaskedautoencoder
AT wanglirong learningtheheterogeneousrepresentationofbrainsstructurefromserialsemimagesusingamaskedautoencoder
AT zhangruobing learningtheheterogeneousrepresentationofbrainsstructurefromserialsemimagesusingamaskedautoencoder