CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning

Bibliographic Details
Main Authors: Cho, Kyungjin, Kim, Ki Duk, Nam, Yujin, Jeong, Jiheon, Kim, Jeeyoung, Choi, Changyong, Lee, Soyoung, Lee, Jun Soo, Woo, Seoyeon, Hong, Gil-Sun, Seo, Joon Beom, Kim, Namkug
Format: Online Article Text
Language: English
Published: Springer International Publishing 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10287612/
https://www.ncbi.nlm.nih.gov/pubmed/36702988
http://dx.doi.org/10.1007/s10278-023-00782-4
_version_ 1785061910172925952
author Cho, Kyungjin
Kim, Ki Duk
Nam, Yujin
Jeong, Jiheon
Kim, Jeeyoung
Choi, Changyong
Lee, Soyoung
Lee, Jun Soo
Woo, Seoyeon
Hong, Gil-Sun
Seo, Joon Beom
Kim, Namkug
author_facet Cho, Kyungjin
Kim, Ki Duk
Nam, Yujin
Jeong, Jiheon
Kim, Jeeyoung
Choi, Changyong
Lee, Soyoung
Lee, Jun Soo
Woo, Seoyeon
Hong, Gil-Sun
Seo, Joon Beom
Kim, Namkug
author_sort Cho, Kyungjin
collection PubMed
description Training deep learning models on medical images depends heavily on experts’ expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray model pre-trained via self-supervised contrastive learning (CheSS) is proposed to learn rich representations of chest radiographs (CXRs). Our contribution is a publicly accessible model pretrained on a 4.8M-image CXR dataset with self-supervised contrastive learning, together with its validation on various downstream tasks: 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared with a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase when using only 1% of the data, as a stress test. On bone suppression with perceptual loss, compared with an ImageNet-pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed decent transferability of the CheSS weights, which can help researchers overcome data imbalance, data shortage, and the inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10278-023-00782-4.
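The downstream-task recipe the description implies (reuse the contrastively pretrained encoder, swap in a task-specific head, fine-tune) can be sketched as follows. This is a minimal illustration, not the authors' code: the checkpoint file name, the state-dict key prefixes, and the ResNet-50 backbone are assumptions here; consult https://github.com/mi2rl/CheSS for the released weights and their actual layout.

```python
# A minimal sketch of reusing contrastively pretrained weights for a downstream
# classifier, in the spirit of the CheSS transfer experiments. Checkpoint path,
# key prefixes, and the ResNet-50 backbone are assumptions, not the repo's API.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_CLASSES = 6  # e.g., the paper's 6-class internal classification task

def load_chess_backbone(ckpt_path: str) -> nn.Module:
    """Load self-supervised encoder weights into a ResNet-50, then attach a new head."""
    model = resnet50(weights=None)  # architecture assumed; see the CheSS repo
    state = torch.load(ckpt_path, map_location="cpu")
    state = state.get("state_dict", state)  # some checkpoints nest the weights
    cleaned = {}
    for key, value in state.items():
        # Strip wrapper prefixes commonly left by DataParallel / MoCo-style training.
        for prefix in ("module.", "encoder_q.", "backbone."):
            if key.startswith(prefix):
                key = key[len(prefix):]
        if not key.startswith("fc."):  # drop the contrastive projection/fc head
            cleaned[key] = value
    missing, unexpected = model.load_state_dict(cleaned, strict=False)
    print(f"missing: {len(missing)} keys, unexpected: {len(unexpected)} keys")
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # fresh task head
    return model

if __name__ == "__main__":
    model = load_chess_backbone("chess_resnet50.pth")  # hypothetical file name
```

Loading with strict=False is the usual choice in this setting: the contrastive projection head in the checkpoint has no counterpart in the plain classifier, and the new fc layer is initialized fresh for the downstream labels before fine-tuning.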
format Online
Article
Text
id pubmed-10287612
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-10287612 2023-06-24 CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning Cho, Kyungjin Kim, Ki Duk Nam, Yujin Jeong, Jiheon Kim, Jeeyoung Choi, Changyong Lee, Soyoung Lee, Jun Soo Woo, Seoyeon Hong, Gil-Sun Seo, Joon Beom Kim, Namkug J Digit Imaging Article Training deep learning models on medical images depends heavily on experts’ expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray model pre-trained via self-supervised contrastive learning (CheSS) is proposed to learn rich representations of chest radiographs (CXRs). Our contribution is a publicly accessible model pretrained on a 4.8M-image CXR dataset with self-supervised contrastive learning, together with its validation on various downstream tasks: 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared with a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase when using only 1% of the data, as a stress test. On bone suppression with perceptual loss, compared with an ImageNet-pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed decent transferability of the CheSS weights, which can help researchers overcome data imbalance, data shortage, and the inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10278-023-00782-4. Springer International Publishing 2023-01-26 2023-06 /pmc/articles/PMC10287612/ /pubmed/36702988 http://dx.doi.org/10.1007/s10278-023-00782-4 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Cho, Kyungjin
Kim, Ki Duk
Nam, Yujin
Jeong, Jiheon
Kim, Jeeyoung
Choi, Changyong
Lee, Soyoung
Lee, Jun Soo
Woo, Seoyeon
Hong, Gil-Sun
Seo, Joon Beom
Kim, Namkug
CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
title CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
title_full CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
title_fullStr CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
title_full_unstemmed CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
title_short CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning
title_sort chess: chest x-ray pre-trained model via self-supervised contrastive learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10287612/
https://www.ncbi.nlm.nih.gov/pubmed/36702988
http://dx.doi.org/10.1007/s10278-023-00782-4
work_keys_str_mv AT chokyungjin chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT kimkiduk chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT namyujin chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT jeongjiheon chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT kimjeeyoung chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT choichangyong chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT leesoyoung chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT leejunsoo chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT wooseoyeon chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT honggilsun chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT seojoonbeom chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning
AT kimnamkug chesschestxraypretrainedmodelviaselfsupervisedcontrastivelearning