Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging

Bibliographic Details
Main Authors: Wolf, Daniel, Payer, Tristan, Lisson, Catharina Silvia, Lisson, Christoph Gerhard, Beer, Meinrad, Götz, Michael, Ropinski, Timo
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10662445/
https://www.ncbi.nlm.nih.gov/pubmed/37985685
http://dx.doi.org/10.1038/s41598-023-46433-0
_version_ 1785148540978200576
author Wolf, Daniel
Payer, Tristan
Lisson, Catharina Silvia
Lisson, Christoph Gerhard
Beer, Meinrad
Götz, Michael
Ropinski, Timo
collection PubMed
description Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach “SparK” for convolutional neural networks (CNNs) on medical images. Therefore, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Due to the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By experimenting with gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we propose the SparK pre-training for medical imaging tasks with only small annotated datasets.
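The description above outlines a two-stage workflow: self-supervised pre-training on a large unannotated CT dataset, followed by fine-tuning on small annotated classification sets. Below is a minimal PyTorch sketch of that workflow using a masked-image-modeling objective for a CNN. It is an illustration only, not the authors' SparK implementation; the toy SmallCNNEncoder, the mask_patches helper, the patch size and mask ratio, the random tensors standing in for CT data, and all hyperparameters are placeholder assumptions.

import torch
import torch.nn as nn

class SmallCNNEncoder(nn.Module):
    # Toy convolutional encoder standing in for a real CNN backbone (e.g. a ResNet).
    def __init__(self, in_channels=1, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)  # (B, dim, H/4, W/4)

def mask_patches(x, patch=8, ratio=0.6):
    # Zero out a random fraction of non-overlapping patches (assumed masking scheme).
    b, _, h, w = x.shape
    keep = (torch.rand(b, 1, h // patch, w // patch) > ratio).float()
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * keep, keep

# Stage 1: self-supervised pre-training on unannotated images (reconstruct masked regions).
encoder = SmallCNNEncoder()
decoder = nn.Sequential(nn.Conv2d(64, 16, 1), nn.PixelShuffle(4))  # back to 1 channel at full resolution
unlabeled = torch.rand(32, 1, 64, 64)  # random tensors standing in for unannotated CT slices
opt = torch.optim.AdamW(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(5):
    masked, keep = mask_patches(unlabeled)
    recon = decoder(encoder(masked))
    loss = (((recon - unlabeled) ** 2) * (1 - keep)).mean()  # reconstruction loss on masked regions only
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: fine-tune the pre-trained encoder on a small annotated classification set.
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2))
classifier = nn.Sequential(encoder, head)
images = torch.rand(8, 1, 64, 64)       # tiny labeled set (placeholder)
labels = torch.randint(0, 2, (8,))
# To mimic the dataset-size study, one could fine-tune on progressively smaller
# subsets (e.g. images[:4], images[:2]) and compare accuracy across pre-training methods.
opt = torch.optim.AdamW(classifier.parameters(), lr=1e-4)
for _ in range(5):
    loss = nn.functional.cross_entropy(classifier(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()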
format Online
Article
Text
id pubmed-10662445
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-10662445 2023-11-20
Sci Rep Article
Nature Publishing Group UK 2023-11-20 /pmc/articles/PMC10662445/ /pubmed/37985685 http://dx.doi.org/10.1038/s41598-023-46433-0
Text en © The Author(s) 2023
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
title Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10662445/
https://www.ncbi.nlm.nih.gov/pubmed/37985685
http://dx.doi.org/10.1038/s41598-023-46433-0