Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping
The rise of self-supervised learning (SSL) methods in recent years presents an opportunity to leverage unlabeled and domain-specific datasets generated by image-based plant phenotyping platforms to accelerate plant breeding programs. Despite the surge of research on SSL, there has been a scarcity of research exploring the applications of SSL to image-based plant phenotyping tasks, particularly detection and counting tasks. We address this gap by benchmarking the performance of 2 SSL methods—momentum contrast (MoCo) v2 and dense contrastive learning (DenseCL)—against the conventional supervised learning method when transferring learned representations to 4 downstream (target) image-based plant phenotyping tasks: wheat head detection, plant instance detection, wheat spikelet counting, and leaf counting. We studied the effects of the domain of the pretraining (source) dataset on the downstream performance and the influence of redundancy in the pretraining dataset on the quality of learned representations. We also analyzed the similarity of the internal representations learned via the different pretraining methods. We find that supervised pretraining generally outperforms self-supervised pretraining and show that MoCo v2 and DenseCL learn different high-level representations compared to the supervised method. We also find that using a diverse source dataset in the same domain as or a similar domain to the target dataset maximizes performance in the downstream task. Finally, our results show that SSL methods may be more sensitive to redundancy in the pretraining dataset than the supervised pretraining method. We hope that this benchmark/evaluation study will guide practitioners in developing better SSL methods for image-based plant phenotyping.
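The abstract above refers to momentum contrast (MoCo) v2 and dense contrastive learning (DenseCL), both of which are pretrained with a contrastive (InfoNCE-style) objective. The PyTorch sketch below illustrates a generic form of that loss; the function name, tensor shapes, and temperature value are illustrative assumptions and are not taken from the study itself.

```python
# Minimal sketch of an InfoNCE contrastive loss, as used in MoCo v2-style
# pretraining. All names, shapes, and the temperature value are assumptions.
import torch
import torch.nn.functional as F

def info_nce_loss(query, positive_key, negative_keys, temperature=0.2):
    """query:         (N, D) embeddings of one augmented view
    positive_key:  (N, D) embeddings of the other view (momentum encoder)
    negative_keys: (K, D) embeddings held in the memory queue
    """
    # L2-normalize so that dot products become cosine similarities
    query = F.normalize(query, dim=1)
    positive_key = F.normalize(positive_key, dim=1)
    negative_keys = F.normalize(negative_keys, dim=1)

    # Positive logits (N, 1) and negative logits (N, K)
    l_pos = torch.sum(query * positive_key, dim=1, keepdim=True)
    l_neg = query @ negative_keys.T
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature

    # The positive key always sits at column 0 of the logits
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In MoCo v2 the positive key comes from a momentum-updated copy of the encoder and the negatives from a large queue of past keys, while DenseCL applies a similar loss to dense feature-map vectors rather than a single pooled embedding.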
Main Authors: | Ogidi, Franklin C.; Eramian, Mark G.; Stavness, Ian
---|---|
Format: | Online Article Text
Language: | English
Published: | AAAS, 2023
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10079263/ https://www.ncbi.nlm.nih.gov/pubmed/37040288 http://dx.doi.org/10.34133/plantphenomics.0037
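The benchmark described in the abstract then transfers each pretrained encoder to downstream detection and counting tasks. The sketch below is a minimal, hypothetical illustration of such a transfer step for a counting task; the ResNet-50 backbone, the checkpoint file name, and the single-output regression head are assumptions made for illustration, not details taken from this record.

```python
# Hypothetical transfer-learning setup: reuse a pretrained encoder
# (supervised, MoCo v2, or DenseCL weights) for a downstream counting task.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def build_counting_model(pretrained_weights_path=None, freeze_backbone=False):
    backbone = resnet50(weights=None)
    if pretrained_weights_path is not None:
        # Load encoder weights saved from the pretraining stage (assumed to be
        # a plain state dict; strict=False tolerates missing projection heads)
        state = torch.load(pretrained_weights_path, map_location="cpu")
        backbone.load_state_dict(state, strict=False)
    # Swap the 1,000-way ImageNet classifier for a single-output regression
    # head that predicts a count (e.g., spikelets or leaves per image)
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)
    if freeze_backbone:
        for name, param in backbone.named_parameters():
            if not name.startswith("fc"):
                param.requires_grad = False
    return backbone

# Example usage (hypothetical checkpoint name):
#   model = build_counting_model("moco_v2_encoder.pth")
#   loss = nn.MSELoss()(model(images).squeeze(1), counts.float())
```

Whether the backbone is kept frozen (linear probing) or fine-tuned end to end is one of the main axes along which transfer studies of this kind compare pretraining methods.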
_version_ | 1785020692685651968 |
---|---|
author | Ogidi, Franklin C. Eramian, Mark G. Stavness, Ian |
author_facet | Ogidi, Franklin C. Eramian, Mark G. Stavness, Ian |
author_sort | Ogidi, Franklin C. |
collection | PubMed |
description | The rise of self-supervised learning (SSL) methods in recent years presents an opportunity to leverage unlabeled and domain-specific datasets generated by image-based plant phenotyping platforms to accelerate plant breeding programs. Despite the surge of research on SSL, there has been a scarcity of research exploring the applications of SSL to image-based plant phenotyping tasks, particularly detection and counting tasks. We address this gap by benchmarking the performance of 2 SSL methods—momentum contrast (MoCo) v2 and dense contrastive learning (DenseCL)—against the conventional supervised learning method when transferring learned representations to 4 downstream (target) image-based plant phenotyping tasks: wheat head detection, plant instance detection, wheat spikelet counting, and leaf counting. We studied the effects of the domain of the pretraining (source) dataset on the downstream performance and the influence of redundancy in the pretraining dataset on the quality of learned representations. We also analyzed the similarity of the internal representations learned via the different pretraining methods. We find that supervised pretraining generally outperforms self-supervised pretraining and show that MoCo v2 and DenseCL learn different high-level representations compared to the supervised method. We also find that using a diverse source dataset in the same domain as or a similar domain to the target dataset maximizes performance in the downstream task. Finally, our results show that SSL methods may be more sensitive to redundancy in the pretraining dataset than the supervised pretraining method. We hope that this benchmark/evaluation study will guide practitioners in developing better SSL methods for image-based plant phenotyping. |
format | Online Article Text |
id | pubmed-10079263 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | AAAS |
record_format | MEDLINE/PubMed |
spelling | pubmed-10079263 2023-04-07 Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping Ogidi, Franklin C. Eramian, Mark G. Stavness, Ian Plant Phenomics Research Article The rise of self-supervised learning (SSL) methods in recent years presents an opportunity to leverage unlabeled and domain-specific datasets generated by image-based plant phenotyping platforms to accelerate plant breeding programs. Despite the surge of research on SSL, there has been a scarcity of research exploring the applications of SSL to image-based plant phenotyping tasks, particularly detection and counting tasks. We address this gap by benchmarking the performance of 2 SSL methods—momentum contrast (MoCo) v2 and dense contrastive learning (DenseCL)—against the conventional supervised learning method when transferring learned representations to 4 downstream (target) image-based plant phenotyping tasks: wheat head detection, plant instance detection, wheat spikelet counting, and leaf counting. We studied the effects of the domain of the pretraining (source) dataset on the downstream performance and the influence of redundancy in the pretraining dataset on the quality of learned representations. We also analyzed the similarity of the internal representations learned via the different pretraining methods. We find that supervised pretraining generally outperforms self-supervised pretraining and show that MoCo v2 and DenseCL learn different high-level representations compared to the supervised method. We also find that using a diverse source dataset in the same domain as or a similar domain to the target dataset maximizes performance in the downstream task. Finally, our results show that SSL methods may be more sensitive to redundancy in the pretraining dataset than the supervised pretraining method. We hope that this benchmark/evaluation study will guide practitioners in developing better SSL methods for image-based plant phenotyping. AAAS 2023-04-03 2023 /pmc/articles/PMC10079263/ /pubmed/37040288 http://dx.doi.org/10.34133/plantphenomics.0037 Text en https://creativecommons.org/licenses/by/4.0/ Exclusive Licensee Nanjing Agricultural University. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution License 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Article Ogidi, Franklin C. Eramian, Mark G. Stavness, Ian Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping |
title | Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping |
title_full | Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping |
title_fullStr | Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping |
title_full_unstemmed | Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping |
title_short | Benchmarking Self-Supervised Contrastive Learning Methods for Image-Based Plant Phenotyping |
title_sort | benchmarking self-supervised contrastive learning methods for image-based plant phenotyping |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10079263/ https://www.ncbi.nlm.nih.gov/pubmed/37040288 http://dx.doi.org/10.34133/plantphenomics.0037 |
work_keys_str_mv | AT ogidifranklinc benchmarkingselfsupervisedcontrastivelearningmethodsforimagebasedplantphenotyping AT eramianmarkg benchmarkingselfsupervisedcontrastivelearningmethodsforimagebasedplantphenotyping AT stavnessian benchmarkingselfsupervisedcontrastivelearningmethodsforimagebasedplantphenotyping |