Fully-automated root image analysis (faRIA)
High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbations on plant root morphology, development and function. To efficiently analyse large amounts of structurally complex soil-root images, advanced methods for automated image segmentation are required.
Main Authors: | Narisetti, Narendra; Henke, Michael; Seiler, Christiane; Junker, Astrid; Ostermann, Jörn; Altmann, Thomas; Gladilin, Evgeny |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8346561/ https://www.ncbi.nlm.nih.gov/pubmed/34362967 http://dx.doi.org/10.1038/s41598-021-95480-y |
_version_ | 1783734902499311616 |
---|---|
author | Narisetti, Narendra Henke, Michael Seiler, Christiane Junker, Astrid Ostermann, Jörn Altmann, Thomas Gladilin, Evgeny |
author_facet | Narisetti, Narendra Henke, Michael Seiler, Christiane Junker, Astrid Ostermann, Jörn Altmann, Thomas Gladilin, Evgeny |
author_sort | Narisetti, Narendra |
collection | PubMed |
description | High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbations on plant root morphology, development and function. To efficiently analyse large amounts of structurally complex soil-root images, advanced methods for automated image segmentation are required. Because of the often unavoidable overlap between foreground and background intensities, simple thresholding methods are generally not suitable for segmenting root regions. Higher-level cognitive models such as convolutional neural networks (CNNs) can segment roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model that relies on an extension of the U-Net architecture. The CNN framework was designed to efficiently segment root structures of different size, shape and optical contrast on low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools such as SegRoot (Dice coefficient 0.67), and that it can be applied not only to NIR images but also to other imaging modalities and plant species, such as barley and Arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to efficiently analyse soil-root images in a fully automated manner (i.e. without manual interaction with the data or parameter tuning), providing quantitative plant scientists with a powerful analytical tool. |
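The segmentation quality quoted above is measured with the Dice coefficient (0.87 for the proposed approach vs. 0.67 for SegRoot). For reference, the minimal NumPy sketch below shows how this overlap score is commonly computed for binary segmentation masks; it is an illustrative implementation, not code from the faRIA software, and the function name `dice_coefficient` is chosen here purely for clarity.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2*|P ∩ T| / (|P| + |T|)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    # eps avoids division by zero when both masks are empty.
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: a predicted root mask compared against a ground-truth mask.
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[0, 1, 1],
                  [0, 1, 1],
                  [0, 0, 0]])
print(f"Dice = {dice_coefficient(pred, truth):.2f}")  # 2*3 / (3+4) ≈ 0.86
```

A Dice value of 1.0 indicates perfect agreement between predicted and ground-truth root pixels, while 0.0 indicates no overlap at all.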
format | Online Article Text |
id | pubmed-8346561 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-8346561 2021-08-10 Fully-automated root image analysis (faRIA) Narisetti, Narendra Henke, Michael Seiler, Christiane Junker, Astrid Ostermann, Jörn Altmann, Thomas Gladilin, Evgeny Sci Rep Article Nature Publishing Group UK 2021-08-06 /pmc/articles/PMC8346561/ /pubmed/34362967 http://dx.doi.org/10.1038/s41598-021-95480-y Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source. |
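The record describes the pre-trained model as an extension of the U-Net architecture. The block below is a deliberately minimal, generic U-Net-style encoder-decoder written in PyTorch, included only to illustrate the general idea of such an architecture (downsampling path, bottleneck, upsampling path with a skip connection); it does not reproduce the actual network depth, channel widths or training setup used in faRIA, and all class and layer names are placeholders chosen here.

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Minimal one-level U-Net: encoder, bottleneck, decoder with skip connection."""

    def __init__(self, in_ch: int = 1, n_classes: int = 1):
        super().__init__()
        self.enc = double_conv(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = double_conv(32, 16)          # 32 = 16 (upsampled) + 16 (skip)
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)                          # full-resolution features
        b = self.bottleneck(self.pool(e))        # half-resolution features
        d = self.up(b)                           # upsample back to full resolution
        d = self.dec(torch.cat([d, e], dim=1))   # concatenate skip connection
        return torch.sigmoid(self.head(d))       # per-pixel root probability

# One grayscale 64x64 tile -> per-pixel probability map of the same size.
tile = torch.rand(1, 1, 64, 64)
print(TinyUNet()(tile).shape)  # torch.Size([1, 1, 64, 64])
```

In segmentation tools of this kind, large rhizotron images are typically split into smaller tiles, each tile is passed through the network to obtain a per-pixel root probability map, and the tile-wise predictions are stitched back together into a whole-image root mask.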
spellingShingle | Article Narisetti, Narendra Henke, Michael Seiler, Christiane Junker, Astrid Ostermann, Jörn Altmann, Thomas Gladilin, Evgeny Fully-automated root image analysis (faRIA) |
title | Fully-automated root image analysis (faRIA) |
title_full | Fully-automated root image analysis (faRIA) |
title_fullStr | Fully-automated root image analysis (faRIA) |
title_full_unstemmed | Fully-automated root image analysis (faRIA) |
title_short | Fully-automated root image analysis (faRIA) |
title_sort | fully-automated root image analysis (faria) |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8346561/ https://www.ncbi.nlm.nih.gov/pubmed/34362967 http://dx.doi.org/10.1038/s41598-021-95480-y |
work_keys_str_mv | AT narisettinarendra fullyautomatedrootimageanalysisfaria AT henkemichael fullyautomatedrootimageanalysisfaria AT seilerchristiane fullyautomatedrootimageanalysisfaria AT junkerastrid fullyautomatedrootimageanalysisfaria AT ostermannjorn fullyautomatedrootimageanalysisfaria AT altmannthomas fullyautomatedrootimageanalysisfaria AT gladilinevgeny fullyautomatedrootimageanalysisfaria |