
Learning from small data: Classifying sex from retinal images via deep learning

Bibliographic Details
Main authors: Berk, Aaron, Ozturan, Gulcenur, Delavari, Parsa, Maberley, David, Yılmaz, Özgür, Oruc, Ipek
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10399793/
https://www.ncbi.nlm.nih.gov/pubmed/37535591
http://dx.doi.org/10.1371/journal.pone.0289211
_version_ 1785084322716319744
author Berk, Aaron
Ozturan, Gulcenur
Delavari, Parsa
Maberley, David
Yılmaz, Özgür
Oruc, Ipek
author_facet Berk, Aaron
Ozturan, Gulcenur
Delavari, Parsa
Maberley, David
Yılmaz, Özgür
Oruc, Ipek
author_sort Berk, Aaron
collection PubMed
description Deep learning (DL) techniques have seen tremendous interest in medical imaging, particularly in the use of convolutional neural networks (CNNs) for the development of automated diagnostic tools. The facility of its non-invasive acquisition makes retinal fundus imaging particularly amenable to such automated approaches. Recent work in the analysis of fundus images using CNNs relies on access to massive datasets for training and validation, composed of hundreds of thousands of images. However, data residency and data privacy restrictions stymie the applicability of this approach in medical settings where patient confidentiality is a mandate. Here, we showcase results for the performance of DL on small datasets to classify patient sex from fundus images—a trait thought not to be present or quantifiable in fundus images until recently. Specifically, we fine-tune a Resnet-152 model whose last layer has been modified to a fully-connected layer for binary classification. We carried out several experiments to assess performance in the small dataset context using one private (DOVS) and one public (ODIR) data source. Our models, developed using approximately 2500 fundus images, achieved test AUC scores of up to 0.72 (95% CI: [0.67, 0.77]). This corresponds to a mere 25% decrease in performance despite a nearly 1000-fold decrease in the dataset size compared to prior results in the literature. Our results show that binary classification, even with a hard task such as sex categorization from retinal fundus images, is possible with very small datasets. Our domain adaptation results show that models trained with one distribution of images may generalize well to an independent external source, as in the case of models trained on DOVS and tested on ODIR. Our results also show that eliminating poor quality images may hamper training of the CNN due to reducing the already small dataset size even further. Nevertheless, using high quality images may be an important factor as evidenced by superior generalizability of results in the domain adaptation experiments. Finally, our work shows that ensembling is an important tool in maximizing performance of deep CNNs in the context of small development datasets.
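The method described above lends itself to a short illustration: a pretrained ResNet-152 whose final layer is replaced by a fully-connected binary head, fine-tuned on a small set of fundus images and evaluated by test AUC. The PyTorch sketch below is a minimal, hypothetical reconstruction under those assumptions; the data paths, transforms, epoch count, and optimizer settings are illustrative choices and not the authors' actual configuration.

# Minimal sketch: fine-tuning a ResNet-152 for binary sex classification from
# fundus images. Paths, hyperparameters, and transforms are illustrative
# assumptions, not the setup used in the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader
from sklearn.metrics import roc_auc_score

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ImageNet-pretrained backbone; swap the last layer for a single-logit
# fully-connected head for binary (male/female) classification.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)
model = model.to(device)

# Standard ImageNet preprocessing; assumes fundus images are stored in
# class-labelled folders (hypothetical paths "data/fundus/train" and ".../test").
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/fundus/train", transform=preprocess)
test_ds = datasets.ImageFolder("data/fundus/test", transform=preprocess)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
test_loader = DataLoader(test_ds, batch_size=32)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fine-tune the whole network for a few epochs (epoch count is illustrative).
for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.float().to(device)
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels)
        loss.backward()
        optimizer.step()

# Report test AUC, the metric quoted in the abstract.
model.eval()
scores, targets = [], []
with torch.no_grad():
    for images, labels in test_loader:
        logits = model(images.to(device)).squeeze(1)
        scores.extend(torch.sigmoid(logits).cpu().tolist())
        targets.extend(labels.tolist())
print("Test AUC:", roc_auc_score(targets, scores))

Ensembling, which the abstract highlights as important for small development datasets, would amount to training several such models (e.g. on different splits or seeds) and averaging their sigmoid scores before computing the AUC.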
format Online
Article
Text
id pubmed-10399793
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-10399793 2023-08-04 Learning from small data: Classifying sex from retinal images via deep learning Berk, Aaron Ozturan, Gulcenur Delavari, Parsa Maberley, David Yılmaz, Özgür Oruc, Ipek PLoS One Research Article Deep learning (DL) techniques have seen tremendous interest in medical imaging, particularly in the use of convolutional neural networks (CNNs) for the development of automated diagnostic tools. The facility of its non-invasive acquisition makes retinal fundus imaging particularly amenable to such automated approaches. Recent work in the analysis of fundus images using CNNs relies on access to massive datasets for training and validation, composed of hundreds of thousands of images. However, data residency and data privacy restrictions stymie the applicability of this approach in medical settings where patient confidentiality is a mandate. Here, we showcase results for the performance of DL on small datasets to classify patient sex from fundus images—a trait thought not to be present or quantifiable in fundus images until recently. Specifically, we fine-tune a Resnet-152 model whose last layer has been modified to a fully-connected layer for binary classification. We carried out several experiments to assess performance in the small dataset context using one private (DOVS) and one public (ODIR) data source. Our models, developed using approximately 2500 fundus images, achieved test AUC scores of up to 0.72 (95% CI: [0.67, 0.77]). This corresponds to a mere 25% decrease in performance despite a nearly 1000-fold decrease in the dataset size compared to prior results in the literature. Our results show that binary classification, even with a hard task such as sex categorization from retinal fundus images, is possible with very small datasets. Our domain adaptation results show that models trained with one distribution of images may generalize well to an independent external source, as in the case of models trained on DOVS and tested on ODIR. Our results also show that eliminating poor quality images may hamper training of the CNN due to reducing the already small dataset size even further. Nevertheless, using high quality images may be an important factor as evidenced by superior generalizability of results in the domain adaptation experiments. Finally, our work shows that ensembling is an important tool in maximizing performance of deep CNNs in the context of small development datasets. Public Library of Science 2023-08-03 /pmc/articles/PMC10399793/ /pubmed/37535591 http://dx.doi.org/10.1371/journal.pone.0289211 Text en © 2023 Berk et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Berk, Aaron
Ozturan, Gulcenur
Delavari, Parsa
Maberley, David
Yılmaz, Özgür
Oruc, Ipek
Learning from small data: Classifying sex from retinal images via deep learning
title Learning from small data: Classifying sex from retinal images via deep learning
title_full Learning from small data: Classifying sex from retinal images via deep learning
title_fullStr Learning from small data: Classifying sex from retinal images via deep learning
title_full_unstemmed Learning from small data: Classifying sex from retinal images via deep learning
title_short Learning from small data: Classifying sex from retinal images via deep learning
title_sort learning from small data: classifying sex from retinal images via deep learning
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10399793/
https://www.ncbi.nlm.nih.gov/pubmed/37535591
http://dx.doi.org/10.1371/journal.pone.0289211
work_keys_str_mv AT berkaaron learningfromsmalldataclassifyingsexfromretinalimagesviadeeplearning
AT ozturangulcenur learningfromsmalldataclassifyingsexfromretinalimagesviadeeplearning
AT delavariparsa learningfromsmalldataclassifyingsexfromretinalimagesviadeeplearning
AT maberleydavid learningfromsmalldataclassifyingsexfromretinalimagesviadeeplearning
AT yılmazozgur learningfromsmalldataclassifyingsexfromretinalimagesviadeeplearning
AT orucipek learningfromsmalldataclassifyingsexfromretinalimagesviadeeplearning