3D Reconstruction of cellular images from microfabricated imagers using fully-adaptive deep neural networks
Millimeter-scale multi-cellular level imagers enable various applications, ranging from intraoperative surgical navigation to implantable sensors. However, the tradeoffs for miniaturization compromise resolution, making extracting 3D cell locations challenging—critical for tumor margin assessment and therapy monitoring.
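The full description below notes that the reconstruction networks were trained on synthetically generated cell images built from Perlin noise. As a purely illustrative aid (not the authors' data pipeline), the following minimal Python sketch shows how such a Perlin-noise "cell" image could be produced; the image size, noise scale, and threshold are arbitrary assumptions, and it relies on the third-party `noise` package.

```python
# Hypothetical sketch: synthetic "cell" images from 2D Perlin noise.
# All parameter values are illustrative assumptions, not the paper's settings.
import numpy as np
from noise import pnoise2   # third-party `noise` package (pip install noise)

def synthetic_cell_image(size=80, scale=8.0, octaves=4, threshold=0.15, seed=0):
    """Return a (size, size) image whose bright Perlin-noise blobs mimic cells."""
    img = np.empty((size, size), dtype=np.float32)
    for y in range(size):
        for x in range(size):
            img[y, x] = pnoise2(x / scale, y / scale, octaves=octaves, base=seed)
    # Keep only the brightest noise regions as "cells"; zero out the background.
    cells = np.where(img > threshold, img, 0.0)
    return cells / cells.max() if cells.max() > 0 else cells

if __name__ == "__main__":
    image = synthetic_cell_image(seed=3)
    print(image.shape, "fraction of 'cell' pixels:", float((image > 0).mean()))
```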
Main Authors: | Najafiaghdam, Hossein; Rabbani, Rozhan; Gharia, Asmaysinh; Papageorgiou, Efthymios P.; Anwar, Mekhail |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9068918/ https://www.ncbi.nlm.nih.gov/pubmed/35508477 http://dx.doi.org/10.1038/s41598-022-10886-6 |
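The description below explains that the first module is a convolutional neural network estimating the depth of a single layer of cells (maximum depth error 100 µm, 87% test accuracy), which suggests depth is treated as a discrete classification target. The PyTorch sketch below shows one hypothetical way such a classifier could be wired up; the 80 × 80 input size, layer widths, and ten depth bins are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a depth-bin classifier for single-channel sensor images.
# Input size, channel counts, and the number of depth bins are assumptions.
import torch
import torch.nn as nn

class DepthClassifier(nn.Module):
    """Tiny CNN mapping one 80x80 single-channel image to a discrete depth bin."""
    def __init__(self, num_depth_bins=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 80x80 -> 40x40
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 40x40 -> 20x20
            nn.Flatten(),
            nn.Linear(32 * 20 * 20, 64), nn.ReLU(),
            nn.Linear(64, num_depth_bins),          # one logit per depth bin
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = DepthClassifier(num_depth_bins=10)
    images = torch.randn(4, 1, 80, 80)              # a batch of fake sensor frames
    logits = model(images)
    loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (4,)))
    print(logits.shape, float(loss))
```

Framing depth as a set of bins with a cross-entropy loss is one plausible reading of the reported "maximum depth error" and "test accuracy" figures; a regression head would be an equally valid alternative.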
_version_ | 1784700321715453952 |
---|---|
author | Najafiaghdam, Hossein Rabbani, Rozhan Gharia, Asmaysinh Papageorgiou, Efthymios P. Anwar, Mekhail |
author_sort | Najafiaghdam, Hossein |
collection | PubMed |
description | Millimeter-scale multi-cellular level imagers enable various applications, ranging from intraoperative surgical navigation to implantable sensors. However, the tradeoffs for miniaturization compromise resolution, making extracting 3D cell locations challenging—critical for tumor margin assessment and therapy monitoring. This work presents three machine-learning-based modules that extract spatial information from single image acquisitions using custom-made millimeter-scale imagers. The neural networks were trained on synthetically-generated (using Perlin noise) cell images. The first network is a convolutional neural network estimating the depth of a single layer of cells, the second is a deblurring module correcting for the point spread function (PSF). The final module extracts spatial information from a single image acquisition of a 3D specimen and reconstructs cross-sections, by providing a layered “map” of cell locations. The maximum depth error of the first module is 100 µm, with 87% test accuracy. The second module’s PSF correction achieves a least-square-error of only 4%. The third module generates a binary “cell” or “no cell” per-pixel labeling with an accuracy ranging from 89% to 85%. This work demonstrates the synergy between ultra-small silicon-based imagers that enable in vivo imaging but face a trade-off in spatial resolution, and the processing power of neural networks to achieve enhancements beyond conventional linear optimization techniques. |
format | Online Article Text |
id | pubmed-9068918 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9068918 2022-05-05 3D Reconstruction of cellular images from microfabricated imagers using fully-adaptive deep neural networks Najafiaghdam, Hossein; Rabbani, Rozhan; Gharia, Asmaysinh; Papageorgiou, Efthymios P.; Anwar, Mekhail Sci Rep Article Nature Publishing Group UK 2022-05-04 /pmc/articles/PMC9068918/ /pubmed/35508477 http://dx.doi.org/10.1038/s41598-022-10886-6 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
title | 3D Reconstruction of cellular images from microfabricated imagers using fully-adaptive deep neural networks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9068918/ https://www.ncbi.nlm.nih.gov/pubmed/35508477 http://dx.doi.org/10.1038/s41598-022-10886-6 |
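The description above also mentions a deblurring module that corrects for the imager's point spread function (PSF). That module is a neural network; for intuition only, the NumPy sketch below shows a classical Wiener-deconvolution baseline for the same problem, with a hypothetical Gaussian PSF and a hand-picked regularization constant standing in for quantities the record does not specify.

```python
# Illustrative classical baseline (not the paper's neural deblurring module):
# Wiener deconvolution of a PSF-blurred scene of sparse point-like "cells".
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2D Gaussian point-spread function of shape (size, size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def otf(psf, shape):
    """Zero-pad the PSF to `shape`, centre it at the origin, and FFT it."""
    padded = np.zeros(shape)
    s = psf.shape[0]
    padded[:s, :s] = psf
    padded = np.roll(padded, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Wiener filter: apply conj(H) / (|H|^2 + k) to the blurred image's spectrum."""
    H = otf(psf, blurred.shape)
    Y = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * Y))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = (rng.random((64, 64)) > 0.97).astype(float)    # sparse point-like "cells"
    psf = gaussian_psf(9, sigma=2.0)
    # Forward model: circular convolution of the scene with the PSF.
    blurred = np.real(np.fft.ifft2(otf(psf, scene.shape) * np.fft.fft2(scene)))
    restored = wiener_deconvolve(blurred, psf)
    print("mean abs error:", float(np.abs(restored - scene).mean()))
```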