
A deep learning generative model approach for image synthesis of plant leaves


Bibliographic Details
Main Authors: Benfenati, Alessandro; Bolzi, Davide; Causin, Paola; Oberti, Roberto
Format: Online Article Text
Language: English
Published: Public Library of Science, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9674145/
https://www.ncbi.nlm.nih.gov/pubmed/36399435
http://dx.doi.org/10.1371/journal.pone.0276972
Description
Summary: OBJECTIVES: A well-known drawback to implementing Convolutional Neural Networks (CNNs) for image recognition is the intensive annotation effort required to build a sufficiently large training dataset, which can become prohibitive in several applications. In this study we focus on applications in the agricultural domain and implement Deep Learning (DL) techniques for the automatic generation of meaningful synthetic images of plant leaves, which can be used as a virtually unlimited dataset to train or validate specialized CNN models or other image-recognition algorithms.

METHODS: Following an approach based on DL generative models, we introduce a Leaf-to-Leaf Translation (L2L) algorithm able to produce collections of novel synthetic images in two steps. First, a residual variational autoencoder architecture generates novel synthetic leaf skeleton geometries, starting from binarized skeletons obtained from real leaf images. Second, a translation via the Pix2pix framework, based on conditional generative adversarial networks (cGANs), reproduces the color distribution of the leaf surface while preserving the underlying venation pattern and leaf shape.

RESULTS: The L2L algorithm generates synthetic leaf images with a meaningful and realistic appearance, indicating that it can significantly help expand a small dataset of real images. Performance was assessed qualitatively and quantitatively, employing a DL anomaly detection strategy that quantifies the degree of anomaly of synthetic leaves with respect to real samples. Finally, as an illustrative example, the proposed L2L algorithm was used to generate a set of synthetic images of healthy and diseased cucumber leaves aimed at training a CNN model for automatic detection of disease symptoms.

CONCLUSIONS: Generative DL approaches have the potential to become a new paradigm for providing low-cost, meaningful synthetic samples.
Our focus was to make synthetic leaf images available for smart agriculture applications but, more generally, they can serve all computer-aided applications that require the representation of vegetation. The present L2L approach represents a step towards this goal, being able to generate synthetic samples with a relevant qualitative and quantitative resemblance to real leaves.
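The two-step L2L pipeline described in the abstract can be sketched schematically: a latent vector sampled via the VAE reparameterization trick is decoded into a binary skeleton, which a Pix2pix-style generator then translates into an RGB leaf image. The sketch below is a minimal, illustrative NumPy mock-up; the `decode_skeleton` and `translate_to_leaf` functions are hypothetical stand-ins for the trained residual VAE decoder and cGAN generator (only the reparameterization step and the data flow reflect the actual method).

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # VAE reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode_skeleton(z, out_shape=(64, 64)):
    # Hypothetical stand-in for the residual VAE decoder: maps a latent
    # vector to a binarized leaf-skeleton mask (random projection here).
    w = rng.standard_normal((z.size, out_shape[0] * out_shape[1]))
    logits = z @ w
    return (logits.reshape(out_shape) > 0).astype(np.uint8)

def translate_to_leaf(skeleton):
    # Hypothetical stand-in for the Pix2pix cGAN generator: maps a
    # 1-channel skeleton to a 3-channel RGB image (toy colorization).
    rgb = np.stack([skeleton * 30, skeleton * 140, skeleton * 40], axis=-1)
    noise = rng.integers(0, 20, rgb.shape)
    return np.clip(rgb + noise, 0, 255).astype(np.uint8)

# Step 1: sample a novel skeleton geometry from the latent space
mu, log_var = np.zeros(16), np.zeros(16)
z = reparameterize(mu, log_var)
skeleton = decode_skeleton(z)

# Step 2: translate the skeleton into a colored synthetic leaf image
leaf = translate_to_leaf(skeleton)
print(skeleton.shape, leaf.shape)  # (64, 64) (64, 64, 3)
```

In the actual method both stages are learned networks trained on real leaf images; the point of the sketch is only the interface between the stages, i.e. that the venation skeleton produced in step 1 conditions the image synthesis in step 2.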