
Pixel-level multimodal fusion deep networks for predicting subcellular organelle localization from label-free live-cell imaging


Bibliographic Details
Main Authors: Wei, Zhihao; Liu, Xi; Yan, Ruiqing; Sun, Guocheng; Yu, Weiyong; Liu, Qiang; Guo, Qianjin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Genetics
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9644055/
https://www.ncbi.nlm.nih.gov/pubmed/36386823
http://dx.doi.org/10.3389/fgene.2022.1002327
Collection: PubMed
Abstract: Complex intracellular organization is commonly represented by dividing the metabolic processes of cells among different organelles. Identifying subcellular organelle architecture is therefore significant for understanding intracellular structural properties, specific functions, and biological processes in cells. However, discriminating these structures in their natural organizational environment, and determining their functional consequences, remains difficult. In this article, we propose a new pixel-level multimodal fusion (PLMF) deep network that predicts the location of cellular organelles from label-free cell optical microscopy images, followed by deep-learning-based automated image denoising. It improves the specificity of label-free cell optical microscopy by using a Transformer-Unet network to predict the ground-truth images corresponding to different subcellular organelle architectures. The proposed prediction method combines the advantages of a transformer's global prediction with a CNN's local analysis of background detail in label-free cell optical microscopy images, improving prediction accuracy. Our experimental results show that the PLMF network achieves a Pearson's correlation coefficient (PCC) above 0.91 between estimated and true fractions on lung cancer cell-imaging datasets. In addition, we applied the PLMF network to predict several different subcellular components simultaneously from label-free cell images, rather than using several fluorescent labels. These results open up a new way for the time-resolved study of subcellular components in different cells, especially cancer cells.
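The evaluation metric reported in the abstract, Pearson's correlation coefficient (PCC) between a predicted image and its ground-truth fluorescence image, can be computed per image pair as follows. This is a minimal NumPy sketch; the helper name `pearson_cc` is illustrative and not from the paper.

```python
import numpy as np

def pearson_cc(pred: np.ndarray, target: np.ndarray) -> float:
    """Pearson correlation between a predicted and a ground-truth image.

    Both arrays are flattened, mean-centered, and compared as vectors,
    which is equivalent to the standard per-image PCC.
    """
    p = pred.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    p = p - p.mean()
    t = t - t.mean()
    return float((p @ t) / (np.linalg.norm(p) * np.linalg.norm(t)))

# A perfectly predicted image correlates with itself at 1.0
# (up to floating-point error); unrelated noise stays near 0.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(pearson_cc(img, img))
print(pearson_cc(img, rng.random((64, 64))))
```

In practice such a score would be averaged over a test set of image pairs; a dataset-level PCC above 0.91, as the abstract reports, indicates that the predicted organelle maps closely track the true fluorescence signal pixel by pixel.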
ID: pubmed-9644055
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Genet (Genetics)
Published online: 2022-10-26
Copyright © 2022 Wei, Liu, Yan, Sun, Yu, Liu and Guo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/