Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet
Counting cells in fluorescent microscopy is a tedious, time-consuming task that researchers have to accomplish to assess the effects of different experimental conditions on biological structures of interest. Although such objects are generally easy to identify, the process of manually annotating cel...
Main authors: | Morelli, Roberto; Clissa, Luca; Amici, Roberto; Cerri, Matteo; Hitrec, Timna; Luppi, Marco; Rinaldi, Lorenzo; Squarcio, Fabio; Zoccoli, Antonio |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2021 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8617067/ https://www.ncbi.nlm.nih.gov/pubmed/34824294 http://dx.doi.org/10.1038/s41598-021-01929-5 |
_version_ | 1784604464603201536 |
author | Morelli, Roberto Clissa, Luca Amici, Roberto Cerri, Matteo Hitrec, Timna Luppi, Marco Rinaldi, Lorenzo Squarcio, Fabio Zoccoli, Antonio |
author_facet | Morelli, Roberto Clissa, Luca Amici, Roberto Cerri, Matteo Hitrec, Timna Luppi, Marco Rinaldi, Lorenzo Squarcio, Fabio Zoccoli, Antonio |
author_sort | Morelli, Roberto |
collection | PubMed |
description | Counting cells in fluorescent microscopy is a tedious, time-consuming task that researchers have to accomplish to assess the effects of different experimental conditions on biological structures of interest. Although such objects are generally easy to identify, the process of manually annotating cells is sometimes subject to fatigue errors and suffers from arbitrariness due to the operator’s interpretation of the borderline cases. We propose a Deep Learning approach that exploits a fully-convolutional network in a binary segmentation fashion to localize the objects of interest. Counts are then retrieved as the number of detected items. Specifically, we introduce a Unet-like architecture, cell ResUnet (c-ResUnet), and compare its performance against three similar architectures. In addition, we evaluate through ablation studies the impact of two design choices: (i) oversampling of artifacts and (ii) weight maps that penalize errors on cell boundaries increasingly with overcrowding. In summary, the c-ResUnet outperforms the competitors with respect to both detection and counting metrics (respectively, F1 score = 0.81 and MAE = 3.09). Moreover, the introduction of weight maps contributes to enhancing performance, especially in the presence of clumping cells, artifacts, and confounding biological structures. Posterior qualitative assessment by domain experts corroborates previous results, suggesting human-level performance, inasmuch as even erroneous predictions seem to fall within the limits of operator interpretation. Finally, we release the pre-trained model and the annotated dataset to foster research in this and related fields. (A minimal code sketch of the counting and weight-map ideas appears after the record fields below.) |
format | Online Article Text |
id | pubmed-8617067 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-8617067 2021-11-29 Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet Morelli, Roberto Clissa, Luca Amici, Roberto Cerri, Matteo Hitrec, Timna Luppi, Marco Rinaldi, Lorenzo Squarcio, Fabio Zoccoli, Antonio Sci Rep Article Counting cells in fluorescent microscopy is a tedious, time-consuming task that researchers have to accomplish to assess the effects of different experimental conditions on biological structures of interest. Although such objects are generally easy to identify, the process of manually annotating cells is sometimes subject to fatigue errors and suffers from arbitrariness due to the operator’s interpretation of the borderline cases. We propose a Deep Learning approach that exploits a fully-convolutional network in a binary segmentation fashion to localize the objects of interest. Counts are then retrieved as the number of detected items. Specifically, we introduce a Unet-like architecture, cell ResUnet (c-ResUnet), and compare its performance against three similar architectures. In addition, we evaluate through ablation studies the impact of two design choices: (i) oversampling of artifacts and (ii) weight maps that penalize errors on cell boundaries increasingly with overcrowding. In summary, the c-ResUnet outperforms the competitors with respect to both detection and counting metrics (respectively, F1 score = 0.81 and MAE = 3.09). Moreover, the introduction of weight maps contributes to enhancing performance, especially in the presence of clumping cells, artifacts, and confounding biological structures. Posterior qualitative assessment by domain experts corroborates previous results, suggesting human-level performance, inasmuch as even erroneous predictions seem to fall within the limits of operator interpretation. Finally, we release the pre-trained model and the annotated dataset to foster research in this and related fields. Nature Publishing Group UK 2021-11-25 /pmc/articles/PMC8617067/ /pubmed/34824294 http://dx.doi.org/10.1038/s41598-021-01929-5 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Morelli, Roberto Clissa, Luca Amici, Roberto Cerri, Matteo Hitrec, Timna Luppi, Marco Rinaldi, Lorenzo Squarcio, Fabio Zoccoli, Antonio Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet |
title | Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet |
title_full | Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet |
title_fullStr | Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet |
title_full_unstemmed | Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet |
title_short | Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet |
title_sort | automating cell counting in fluorescent microscopy through deep learning with c-resunet |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8617067/ https://www.ncbi.nlm.nih.gov/pubmed/34824294 http://dx.doi.org/10.1038/s41598-021-01929-5 |
work_keys_str_mv | AT morelliroberto automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT clissaluca automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT amiciroberto automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT cerrimatteo automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT hitrectimna automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT luppimarco automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT rinaldilorenzo automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT squarciofabio automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet AT zoccoliantonio automatingcellcountinginfluorescentmicroscopythroughdeeplearningwithcresunet |
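The abstract above hinges on two technical ideas: counts are obtained as the number of connected objects in a binarized segmentation mask, and training uses weight maps that penalize boundary errors more heavily where cells crowd together. The sketch below illustrates both in Python using only NumPy and SciPy. It is a minimal, hypothetical reconstruction, not the authors' released c-ResUnet code: the function names, the threshold and minimum-size values, and the Gaussian weight-map form (borrowed from the original U-Net formulation) are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage


def count_cells(prob_map, threshold=0.5, min_size=20):
    """Count cells as connected components of a thresholded probability map.

    prob_map : 2D float array of per-pixel foreground probabilities
               (the output of a segmentation network).
    threshold and min_size are illustrative values, not taken from the paper.
    """
    mask = prob_map >= threshold
    labels, n = ndimage.label(mask)                      # connected components
    if n == 0:
        return 0
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))  # component areas
    return int(np.sum(np.asarray(sizes) >= min_size))    # ignore tiny specks


def boundary_weight_map(instance_labels, w0=10.0, sigma=5.0):
    """U-Net-style pixel weight map: heavier loss between nearby cells.

    instance_labels : 2D int array, 0 = background, 1..K = individual cells.
    Returns per-pixel weights that grow where two distinct cells almost touch,
    so boundary errors in crowded regions are penalized more.
    """
    ids = np.unique(instance_labels)
    ids = ids[ids > 0]
    weights = np.ones(instance_labels.shape, dtype=float)
    if len(ids) < 2:
        return weights
    # Distance from every pixel to the nearest pixel of each individual cell.
    dists = np.stack([ndimage.distance_transform_edt(instance_labels != i)
                      for i in ids])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]          # nearest and second-nearest cell
    extra = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    weights += np.where(instance_labels == 0, extra, 0.0)  # only in the gaps
    return weights
```

In this formulation the extra weight concentrates on background pixels lying between two nearby cells, which is exactly where segmentation-based counting tends to merge touching objects into a single component and thus undercount.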