
Learning to see colours: Biologically relevant virtual staining for adipocyte cell images

Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour intensive, and toxic to the cells. Bright-field images lack these downsides but also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as “label-free prediction” and “in-silico labeling”) can get the best of both worlds, but can be very challenging to do for poorly visible cellular structures in the bright-field images. To tackle this problem deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.


Bibliographic Details
Main Authors: Wieslander, Håkan; Gupta, Ankit; Bergman, Ebba; Hallström, Erik; Harrison, Philip John
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8519425/
https://www.ncbi.nlm.nih.gov/pubmed/34653209
http://dx.doi.org/10.1371/journal.pone.0258546
collection PubMed
description Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour intensive, and toxic to the cells. Bright-field images lack these downsides but also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as “label-free prediction” and “in-silico labeling”) can get the best of both worlds, but can be very challenging to do for poorly visible cellular structures in the bright-field images. To tackle this problem deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.
format Online
Article
Text
id pubmed-8519425
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling Learning to see colours: Biologically relevant virtual staining for adipocyte cell images. Wieslander, Håkan; Gupta, Ankit; Bergman, Ebba; Hallström, Erik; Harrison, Philip John. PLoS One, Research Article. Public Library of Science, 2021-10-15. /pmc/articles/PMC8519425/ /pubmed/34653209 http://dx.doi.org/10.1371/journal.pone.0258546. Text, en. © 2021 Wieslander et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
title Learning to see colours: Biologically relevant virtual staining for adipocyte cell images
topic Research Article