
A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping


Bibliographic Details
Main Authors: Henke, Michael; Junker, Astrid; Neumann, Kerstin; Altmann, Thomas; Gladilin, Evgeny
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects: Methodology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346525/
https://www.ncbi.nlm.nih.gov/pubmed/32670387
http://dx.doi.org/10.1186/s13007-020-00637-x
_version_ 1783556425573728256
author Henke, Michael
Junker, Astrid
Neumann, Kerstin
Altmann, Thomas
Gladilin, Evgeny
author_facet Henke, Michael
Junker, Astrid
Neumann, Kerstin
Altmann, Thomas
Gladilin, Evgeny
author_sort Henke, Michael
collection PubMed
description BACKGROUND: Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate automated segmentation of unimodal plant images. To overcome the problem of ambiguous color information in unimodal data, images of different modalities can be combined into a virtual multispectral cube. However, due to motion artefacts caused by the relocation of plants between photochambers, the alignment of multimodal images is often compromised by blurring artefacts. RESULTS: Here, we present an approach to automated segmentation of greenhouse plant images that is based on co-registration of fluorescence (FLU) and visible light (VIS) camera images, followed by separation of plant and marginal background regions using different species- and camera-view-tailored classification models. Our experimental results, including a direct comparison with manually segmented ground-truth data, show that images of different plant types acquired at different developmental stages from different camera views can be automatically segmented with an average accuracy of [Formula: see text] ([Formula: see text]) using our two-step registration-classification approach. CONCLUSION: Automated segmentation of arbitrary greenhouse images exhibiting highly variable optical plant and background appearance represents a challenging task for data classification techniques that rely on the detection of invariances. To overcome the limitations of unimodal image analysis, a two-step registration-classification approach to combined analysis of fluorescence and visible light images was developed. Our experimental results show that this algorithmic approach enables accurate segmentation of different FLU/VIS plant images and is suitable for application in a fully automated, high-throughput manner.
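The description above outlines a two-step pipeline: FLU and VIS camera images are first co-registered, and the fused multimodal data are then classified into plant and background regions. Below is a minimal sketch of such a pipeline in Python, assuming ECC-based affine registration from OpenCV and a k-nearest-neighbour colour classifier from scikit-learn; the file names, feature layout, and classifier choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-step registration-classification pipeline.
# Assumptions (not taken from the paper): ECC affine registration via OpenCV,
# a k-NN colour classifier from scikit-learn, and illustrative file names.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# --- Step 1: co-register the FLU image onto the VIS image ----------------
vis = cv2.imread("vis_view.png")                       # visible-light image (BGR)
flu = cv2.imread("flu_view.png")                       # fluorescence image (BGR)
vis_gray = cv2.cvtColor(vis, cv2.COLOR_BGR2GRAY).astype(np.float32)
flu_gray = cv2.cvtColor(flu, cv2.COLOR_BGR2GRAY).astype(np.float32)

warp = np.eye(2, 3, dtype=np.float32)                  # initial affine estimate
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
_, warp = cv2.findTransformECC(vis_gray, flu_gray, warp,
                               cv2.MOTION_AFFINE, criteria)
flu_reg = cv2.warpAffine(flu, warp, (vis.shape[1], vis.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# --- Step 2: classify pixels of the fused FLU/VIS feature cube -----------
cube = np.dstack([vis, flu_reg]).reshape(-1, 6).astype(np.float32)

# Hypothetical training data: per-pixel features and plant/background labels
# sampled from a few manually annotated images of the same species and view.
train_feats = np.load("train_features.npy")            # shape (n_samples, 6)
train_labels = np.load("train_labels.npy")             # 1 = plant, 0 = background

clf = KNeighborsClassifier(n_neighbors=5).fit(train_feats, train_labels)
mask = clf.predict(cube).reshape(vis.shape[:2]).astype(np.uint8) * 255
cv2.imwrite("plant_mask.png", mask)                    # binary segmentation mask
```

In a species- and camera-view-tailored setup, as the description suggests, a separate classifier of this kind would be trained per plant type and camera view rather than one global model.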
format Online
Article
Text
id pubmed-7346525
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-7346525 2020-07-14 A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping Henke, Michael Junker, Astrid Neumann, Kerstin Altmann, Thomas Gladilin, Evgeny Plant Methods Methodology BACKGROUND: Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate automated segmentation of unimodal plant images. To overcome the problem of ambiguous color information in unimodal data, images of different modalities can be combined into a virtual multispectral cube. However, due to motion artefacts caused by the relocation of plants between photochambers, the alignment of multimodal images is often compromised by blurring artefacts. RESULTS: Here, we present an approach to automated segmentation of greenhouse plant images that is based on co-registration of fluorescence (FLU) and visible light (VIS) camera images, followed by separation of plant and marginal background regions using different species- and camera-view-tailored classification models. Our experimental results, including a direct comparison with manually segmented ground-truth data, show that images of different plant types acquired at different developmental stages from different camera views can be automatically segmented with an average accuracy of [Formula: see text] ([Formula: see text]) using our two-step registration-classification approach. CONCLUSION: Automated segmentation of arbitrary greenhouse images exhibiting highly variable optical plant and background appearance represents a challenging task for data classification techniques that rely on the detection of invariances. To overcome the limitations of unimodal image analysis, a two-step registration-classification approach to combined analysis of fluorescence and visible light images was developed. Our experimental results show that this algorithmic approach enables accurate segmentation of different FLU/VIS plant images and is suitable for application in a fully automated, high-throughput manner. BioMed Central 2020-07-09 /pmc/articles/PMC7346525/ /pubmed/32670387 http://dx.doi.org/10.1186/s13007-020-00637-x Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
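The RESULTS text above reports a direct pixel-wise comparison of automatic segmentations against manually segmented ground truth. The sketch below shows how such an evaluation could be computed, assuming pixel accuracy and Cohen's kappa from scikit-learn on hypothetical mask files; the metric and file names are assumptions for illustration, not taken from the record.

```python
# Minimal sketch of a pixel-wise comparison between an automatic segmentation
# and a manually segmented ground-truth mask. Metric choices (pixel accuracy,
# Cohen's kappa) and file names are illustrative assumptions.
import cv2
from sklearn.metrics import accuracy_score, cohen_kappa_score

auto = cv2.imread("plant_mask.png", cv2.IMREAD_GRAYSCALE) > 0           # predicted mask
truth = cv2.imread("ground_truth_mask.png", cv2.IMREAD_GRAYSCALE) > 0   # manual mask

y_pred = auto.ravel().astype(int)    # flatten to per-pixel binary labels
y_true = truth.ravel().astype(int)

print("pixel accuracy:", accuracy_score(y_true, y_pred))
print("Cohen's kappa: ", cohen_kappa_score(y_true, y_pred))
```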
spellingShingle Methodology
Henke, Michael
Junker, Astrid
Neumann, Kerstin
Altmann, Thomas
Gladilin, Evgeny
A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
title A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
title_full A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
title_fullStr A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
title_full_unstemmed A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
title_short A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
title_sort two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping
topic Methodology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346525/
https://www.ncbi.nlm.nih.gov/pubmed/32670387
http://dx.doi.org/10.1186/s13007-020-00637-x
work_keys_str_mv AT henkemichael atwostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT junkerastrid atwostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT neumannkerstin atwostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT altmannthomas atwostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT gladilinevgeny atwostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT henkemichael twostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT junkerastrid twostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT neumannkerstin twostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT altmannthomas twostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping
AT gladilinevgeny twostepregistrationclassificationapproachtoautomatedsegmentationofmultimodalimagesforhighthroughputgreenhouseplantphenotyping