
Improving the Neural Segmentation of Blurry Serial SEM Images by Blind Deblurring

Bibliographic Details
Main Authors: Cheng, Ao; Kang, Kai; Zhu, Zhanpeng; Zhang, Ruobing; Wang, Lirong
Format: Online Article Text
Language: English
Published: Hindawi 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9879678/
https://www.ncbi.nlm.nih.gov/pubmed/36711194
http://dx.doi.org/10.1155/2023/8936903
Description
Summary: Serial scanning electron microscopy (sSEM) has recently been developed to reconstruct complex large-scale neural connectomes through learning-based instance segmentation. However, blurry images are inevitable amid prolonged automated data acquisition due to imprecision in autofocusing and autostigmation, which poses a great challenge to the accurate segmentation of the massive sSEM image data. Learning-based methods, such as adversarial learning and supervised learning, have recently been proven effective for blind EM image deblurring. In practice, however, these methods suffer from limited training data and the underrepresentation of high-resolution decoded features. Here, we propose a semisupervised-learning-guided progressive decoding network (SGPN) to exploit unlabeled blurry images for training and to progressively enrich high-resolution feature representation. The proposed method outperforms the latest deblurring models on real SEM images with far less ground-truth input, improving PSNR by 1.04 dB and SSIM by 0.086. We then trained segmentation models on the deblurred datasets and demonstrated a significant improvement in segmentation accuracy: the A-rand error (Bogovic et al. 2013) decreased by 0.119 and 0.026 for 2D and 3D segmentation, respectively.
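The reported 1.04 dB gain refers to peak signal-to-noise ratio (PSNR), a standard restoration metric computed from the mean squared error between a reference image and its restored counterpart. A minimal pure-Python sketch, assuming 8-bit intensities and flat pixel lists (illustrative only, not the paper's evaluation code):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat sequences of pixel intensities."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: a restored patch off by one intensity level per pixel (MSE = 1)
ref = [100, 120, 140, 160]
out = [101, 119, 141, 159]
print(round(psnr(ref, out), 2))  # → 48.13
```

Higher PSNR means the restored image is closer to the reference, so a +1.04 dB shift reflects a measurable drop in residual blur artifacts.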