
Sparse self-attention aggregation networks for neural sequence slice interpolation


Bibliographic Details
Main Authors: Wang, Zejin, Liu, Jing, Chen, Xi, Li, Guoqing, Han, Hua
Format: Online Article Text
Language: English
Published: BioMed Central 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7852179/
https://www.ncbi.nlm.nih.gov/pubmed/33522940
http://dx.doi.org/10.1186/s13040-021-00236-z
author Wang, Zejin
Liu, Jing
Chen, Xi
Li, Guoqing
Han, Hua
author_sort Wang, Zejin
collection PubMed
description BACKGROUND: Microscopic imaging is a crucial technology for visualizing neural and tissue structures. Large-area defects inevitably occur during the imaging of electron microscope (EM) serial slices; these defects degrade registration and semantic segmentation and reduce the accuracy of 3D reconstruction. The continuity of biological tissue across serial EM images makes it possible to recover missing tissue by inter-slice interpolation. However, large deformation, noise, and blur among EM images keep the task challenging. Existing flow-based and kernel-based methods can perform frame interpolation only on images with little noise and low blur, and they cannot effectively handle large deformations in EM images. RESULTS: In this paper, we propose a sparse self-attention aggregation network that synthesizes pixels following the continuity of biological tissue. First, we develop an attention-aware layer for consecutive EM image interpolation that implicitly adopts global perceptual deformation. Second, we present an adaptive style-balance loss that takes the style differences of serial EM images, such as blur and noise, into consideration. Guided by the attention-aware module, adaptively synthesizing each pixel aggregated from the global domain further improves the performance of pixel synthesis. Quantitative and qualitative experiments show that the proposed method is superior to state-of-the-art approaches. CONCLUSIONS: The proposed method can be considered an effective strategy for modeling the relationship between each pixel and all other pixels in the global domain. This approach improves the algorithm's robustness to noise and large deformation and can accurately predict the effective information of the missing region, which will greatly promote data analysis in neurobiological research.
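The core idea the abstract describes, synthesizing each output pixel as an attention-weighted aggregation over the global domain of the neighbouring slices, can be illustrated with a minimal toy sketch. This is a hypothetical, simplified illustration only, not the authors' implementation: the function name `attention_interpolate`, the use of raw pixel intensity as the attention feature, and the Gaussian similarity score are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_interpolate(slice_a, slice_b, scale=1.0):
    """Toy global self-attention aggregation (illustrative only).

    Each interpolated pixel is a weighted sum over ALL pixels of the
    two neighbouring slices; the weights come from a softmax over a
    similarity score, so synthesis draws on the global domain rather
    than a local kernel window.
    """
    # Flatten both neighbouring slices into one global context. (2N,)
    ctx = np.concatenate([slice_a.ravel(), slice_b.ravel()])
    # Queries: naive per-pixel average of the two slices. (N,)
    q = ((slice_a + slice_b) / 2.0).ravel()
    # Similarity between each query pixel and every context pixel. (N, 2N)
    scores = -scale * (q[:, None] - ctx[None, :]) ** 2
    weights = softmax(scores, axis=1)
    # Aggregate globally: every context pixel can contribute.
    out = weights @ ctx
    return out.reshape(slice_a.shape)
```

In the paper's setting the attention would be computed over learned features and made sparse for tractability; a dense toy version like this scales as O(N^2) in the number of pixels, which is exactly why sparsification matters for large EM slices.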
format Online
Article
Text
id pubmed-7852179
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-7852179 2021-02-03 Sparse self-attention aggregation networks for neural sequence slice interpolation BioData Min Research BioMed Central 2021-02-01 /pmc/articles/PMC7852179/ /pubmed/33522940 http://dx.doi.org/10.1186/s13040-021-00236-z Text en © The Author(s) 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
title Sparse self-attention aggregation networks for neural sequence slice interpolation
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7852179/
https://www.ncbi.nlm.nih.gov/pubmed/33522940
http://dx.doi.org/10.1186/s13040-021-00236-z