A deep learning generative model approach for image synthesis of plant leaves
OBJECTIVES: A well-known drawback to the implementation of Convolutional Neural Networks (CNNs) for image recognition is the intensive annotation effort required to build a large enough training dataset, which can become prohibitive in several applications. In this study we focus on applications in the agricultural...
Main Authors: | Benfenati, Alessandro, Bolzi, Davide, Causin, Paola, Oberti, Roberto |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9674145/ https://www.ncbi.nlm.nih.gov/pubmed/36399435 http://dx.doi.org/10.1371/journal.pone.0276972 |
_version_ | 1784833090811592704 |
---|---|
author | Benfenati, Alessandro Bolzi, Davide Causin, Paola Oberti, Roberto |
author_facet | Benfenati, Alessandro Bolzi, Davide Causin, Paola Oberti, Roberto |
author_sort | Benfenati, Alessandro |
collection | PubMed |
description | OBJECTIVES: A well-known drawback to the implementation of Convolutional Neural Networks (CNNs) for image recognition is the intensive annotation effort required to build a large enough training dataset, which can become prohibitive in several applications. In this study we focus on applications in the agricultural domain and we implement Deep Learning (DL) techniques for the automatic generation of meaningful synthetic images of plant leaves, which can be used as a virtually unlimited dataset to train or validate specialized CNN models or other image-recognition algorithms. METHODS: Following an approach based on DL generative models, we introduce a Leaf-to-Leaf Translation (L2L) algorithm able to produce collections of novel synthetic images in two steps: first, a residual variational autoencoder architecture generates novel synthetic leaf skeleton geometries, starting from binarized skeletons obtained from real leaf images. Second, a translation via the Pix2pix framework, based on conditional generative adversarial networks (cGANs), reproduces the color distribution of the leaf surface while preserving the underlying venation pattern and leaf shape. RESULTS: The L2L algorithm generates synthetic leaf images with a meaningful and realistic appearance, indicating that it can significantly contribute to expanding a small dataset of real images. The performance was assessed qualitatively and quantitatively by employing a DL anomaly detection strategy that quantifies the anomaly degree of synthetic leaves with respect to real samples. Finally, as an illustrative example, the proposed L2L algorithm was used to generate a set of synthetic images of healthy and diseased cucumber leaves aimed at training a CNN model for automatic detection of disease symptoms. CONCLUSIONS: Generative DL approaches have the potential to become a new paradigm for providing low-cost, meaningful synthetic samples. Our focus was to make synthetic leaf images available for smart agriculture applications but, more generally, they can serve all computer-aided applications that require the representation of vegetation. The present L2L approach represents a step towards this goal, being able to generate synthetic samples with a notable qualitative and quantitative resemblance to real leaves. |
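The METHODS paragraph above describes a two-step generative pipeline. As a minimal, hedged sketch of step 1 (not the authors' implementation), the residual variational autoencoder below learns a latent representation of binarized leaf skeletons and then samples novel skeleton geometries by decoding random latent vectors. PyTorch, the 64×64 single-channel resolution, the layer widths, the latent dimension of 128 and the KL weight are all illustrative assumptions.

```python
# Illustrative sketch of step 1 of the L2L pipeline: a residual VAE over
# binarized leaf skeletons. Sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Simple residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        h = F.relu(self.conv1(x))
        return F.relu(x + self.conv2(h))

class SkeletonVAE(nn.Module):
    """Residual VAE over 1-channel 64x64 binary skeleton images (assumed resolution)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            ResBlock(32),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            ResBlock(64),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.decoder = nn.Sequential(
            ResBlock(64),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            ResBlock(32),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        logits = self.decoder(self.fc_dec(z).view(-1, 64, 16, 16))
        return logits, mu, logvar

def vae_loss(logits, target, mu, logvar, beta=1.0):
    # Binary cross-entropy reconstruction + KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy_with_logits(logits, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

# Sampling novel skeletons after training: decode random latent vectors and binarize.
model = SkeletonVAE()
z = torch.randn(4, 128)
skeletons = torch.sigmoid(model.decoder(model.fc_dec(z).view(-1, 64, 16, 16))) > 0.5
```

The residual blocks in both the encoder and decoder mirror the "residual variational autoencoder" wording of the abstract, and binary cross-entropy is a natural reconstruction loss for binarized skeleton images.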
format | Online Article Text |
id | pubmed-9674145 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9674145 2022-11-19 A deep learning generative model approach for image synthesis of plant leaves Benfenati, Alessandro Bolzi, Davide Causin, Paola Oberti, Roberto PLoS One Research Article OBJECTIVES: A well-known drawback to the implementation of Convolutional Neural Networks (CNNs) for image recognition is the intensive annotation effort required to build a large enough training dataset, which can become prohibitive in several applications. In this study we focus on applications in the agricultural domain and we implement Deep Learning (DL) techniques for the automatic generation of meaningful synthetic images of plant leaves, which can be used as a virtually unlimited dataset to train or validate specialized CNN models or other image-recognition algorithms. METHODS: Following an approach based on DL generative models, we introduce a Leaf-to-Leaf Translation (L2L) algorithm able to produce collections of novel synthetic images in two steps: first, a residual variational autoencoder architecture generates novel synthetic leaf skeleton geometries, starting from binarized skeletons obtained from real leaf images. Second, a translation via the Pix2pix framework, based on conditional generative adversarial networks (cGANs), reproduces the color distribution of the leaf surface while preserving the underlying venation pattern and leaf shape. RESULTS: The L2L algorithm generates synthetic leaf images with a meaningful and realistic appearance, indicating that it can significantly contribute to expanding a small dataset of real images. The performance was assessed qualitatively and quantitatively by employing a DL anomaly detection strategy that quantifies the anomaly degree of synthetic leaves with respect to real samples. Finally, as an illustrative example, the proposed L2L algorithm was used to generate a set of synthetic images of healthy and diseased cucumber leaves aimed at training a CNN model for automatic detection of disease symptoms. CONCLUSIONS: Generative DL approaches have the potential to become a new paradigm for providing low-cost, meaningful synthetic samples. Our focus was to make synthetic leaf images available for smart agriculture applications but, more generally, they can serve all computer-aided applications that require the representation of vegetation. The present L2L approach represents a step towards this goal, being able to generate synthetic samples with a notable qualitative and quantitative resemblance to real leaves. Public Library of Science 2022-11-18 /pmc/articles/PMC9674145/ /pubmed/36399435 http://dx.doi.org/10.1371/journal.pone.0276972 Text en © 2022 Benfenati et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
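Step 2 of the pipeline, the skeleton-to-leaf translation, can likewise be sketched as a Pix2pix-style conditional GAN objective: the generator receives the skeleton and is trained with an adversarial term plus an L1 term against the paired real leaf, while the discriminator scores concatenated (skeleton, leaf) pairs. This is an illustrative sketch, not the paper's code; the tiny stand-in networks, the 64×64 resolution and the weight lambda_l1 = 100 follow the standard Pix2pix recipe and are assumptions here.

```python
# Illustrative Pix2pix-style objective for skeleton -> colored leaf translation.
import torch
import torch.nn as nn

def generator_step(G, D, skeleton, real_leaf, opt_G, lambda_l1=100.0):
    """One generator update: fool D and stay close to the real leaf in L1."""
    bce = nn.BCEWithLogitsLoss()
    fake_leaf = G(skeleton)                                   # (N, 3, H, W) RGB prediction
    pred_fake = D(torch.cat([skeleton, fake_leaf], dim=1))    # D is conditioned on the skeleton
    loss_adv = bce(pred_fake, torch.ones_like(pred_fake))     # adversarial term
    loss_l1 = nn.functional.l1_loss(fake_leaf, real_leaf)     # reconstruction term
    loss = loss_adv + lambda_l1 * loss_l1
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return fake_leaf.detach()

def discriminator_step(D, skeleton, real_leaf, fake_leaf, opt_D):
    """One discriminator update on real vs. generated (skeleton, leaf) pairs."""
    bce = nn.BCEWithLogitsLoss()
    pred_real = D(torch.cat([skeleton, real_leaf], dim=1))
    pred_fake = D(torch.cat([skeleton, fake_leaf], dim=1))
    loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
    opt_D.zero_grad()
    loss.backward()
    opt_D.step()

# Illustrative usage with tiny stand-in networks (a real Pix2pix setup would use
# a U-Net generator and a PatchGAN discriminator, as in the original framework).
G = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(4, 1, 4, stride=2, padding=1))   # skeleton + leaf in, patch scores out
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
skeleton, real_leaf = torch.rand(2, 1, 64, 64), torch.rand(2, 3, 64, 64)
fake_leaf = generator_step(G, D, skeleton, real_leaf, opt_G)
discriminator_step(D, skeleton, real_leaf, fake_leaf, opt_D)
```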
spellingShingle | Research Article Benfenati, Alessandro Bolzi, Davide Causin, Paola Oberti, Roberto A deep learning generative model approach for image synthesis of plant leaves |
title | A deep learning generative model approach for image synthesis of plant leaves |
title_full | A deep learning generative model approach for image synthesis of plant leaves |
title_fullStr | A deep learning generative model approach for image synthesis of plant leaves |
title_full_unstemmed | A deep learning generative model approach for image synthesis of plant leaves |
title_short | A deep learning generative model approach for image synthesis of plant leaves |
title_sort | deep learning generative model approach for image synthesis of plant leaves |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9674145/ https://www.ncbi.nlm.nih.gov/pubmed/36399435 http://dx.doi.org/10.1371/journal.pone.0276972 |
work_keys_str_mv | AT benfenatialessandro adeeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT bolzidavide adeeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT causinpaola adeeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT obertiroberto adeeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT benfenatialessandro deeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT bolzidavide deeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT causinpaola deeplearninggenerativemodelapproachforimagesynthesisofplantleaves AT obertiroberto deeplearninggenerativemodelapproachforimagesynthesisofplantleaves |