Efficient end-to-end learning for cell segmentation with machine generated weak annotations
Main Authors: | Shrestha, Prem; Kuang, Nicholas; Yu, Ji |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9981753/ https://www.ncbi.nlm.nih.gov/pubmed/36864076 http://dx.doi.org/10.1038/s42003-023-04608-5 |
_version_ | 1784900176440197120 |
author | Shrestha, Prem; Kuang, Nicholas; Yu, Ji |
author_facet | Shrestha, Prem; Kuang, Nicholas; Yu, Ji |
author_sort | Shrestha, Prem |
collection | PubMed |
description | Automated cell segmentation from optical microscopy images is usually the first step in the pipeline of single-cell analysis. Recently, deep-learning-based algorithms have shown superior performance for cell segmentation tasks. However, a disadvantage of deep learning is the requirement for a large amount of fully annotated training data, which is costly to generate. Weakly supervised and self-supervised learning are active research areas, but often the model accuracy is inversely correlated with the amount of annotation information provided. Here we focus on a specific subtype of weak annotations, which can be generated programmatically from experimental data, thus allowing for more annotation information content without sacrificing annotation speed. We designed a new model architecture for end-to-end training using such incomplete annotations. We have benchmarked our method on a variety of publicly available datasets, covering both fluorescence and bright-field imaging modalities. We additionally tested our method on a microscopy dataset generated by us, using machine-generated annotations. The results demonstrated that our models trained under weak supervision can achieve segmentation accuracy competitive with, and in some cases surpassing, state-of-the-art models trained under full supervision. Therefore, our method can be a practical alternative to the established full-supervision methods. |
format | Online Article Text |
id | pubmed-9981753 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-99817532023-03-04 Efficient end-to-end learning for cell segmentation with machine generated weak annotations Shrestha, Prem Kuang, Nicholas Yu, Ji Commun Biol Article Automated cell segmentation from optical microscopy images is usually the first step in the pipeline of single-cell analysis. Recently, deep-learning-based algorithms have shown superior performance for cell segmentation tasks. However, a disadvantage of deep learning is the requirement for a large amount of fully annotated training data, which is costly to generate. Weakly supervised and self-supervised learning are active research areas, but often the model accuracy is inversely correlated with the amount of annotation information provided. Here we focus on a specific subtype of weak annotations, which can be generated programmatically from experimental data, thus allowing for more annotation information content without sacrificing annotation speed. We designed a new model architecture for end-to-end training using such incomplete annotations. We have benchmarked our method on a variety of publicly available datasets, covering both fluorescence and bright-field imaging modalities. We additionally tested our method on a microscopy dataset generated by us, using machine-generated annotations. The results demonstrated that our models trained under weak supervision can achieve segmentation accuracy competitive with, and in some cases surpassing, state-of-the-art models trained under full supervision. Therefore, our method can be a practical alternative to the established full-supervision methods.
Nature Publishing Group UK 2023-03-02 /pmc/articles/PMC9981753/ /pubmed/36864076 http://dx.doi.org/10.1038/s42003-023-04608-5 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Shrestha, Prem Kuang, Nicholas Yu, Ji Efficient end-to-end learning for cell segmentation with machine generated weak annotations |
title | Efficient end-to-end learning for cell segmentation with machine generated weak annotations |
title_full | Efficient end-to-end learning for cell segmentation with machine generated weak annotations |
title_fullStr | Efficient end-to-end learning for cell segmentation with machine generated weak annotations |
title_full_unstemmed | Efficient end-to-end learning for cell segmentation with machine generated weak annotations |
title_short | Efficient end-to-end learning for cell segmentation with machine generated weak annotations |
title_sort | efficient end-to-end learning for cell segmentation with machine generated weak annotations |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9981753/ https://www.ncbi.nlm.nih.gov/pubmed/36864076 http://dx.doi.org/10.1038/s42003-023-04608-5 |
work_keys_str_mv | AT shresthaprem efficientendtoendlearningforcellsegmentationwithmachinegeneratedweakannotations AT kuangnicholas efficientendtoendlearningforcellsegmentationwithmachinegeneratedweakannotations AT yuji efficientendtoendlearningforcellsegmentationwithmachinegeneratedweakannotations |