Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation

Bibliographic Details
Main Authors: Park, Sangjoon, Kim, Gwanghyun, Oh, Yujin, Seo, Joon Beom, Lee, Sang Min, Kim, Jin Hwan, Moon, Sungjun, Lim, Jae-Kwang, Park, Chang Min, Ye, Jong Chul
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9252561/
https://www.ncbi.nlm.nih.gov/pubmed/35789159
http://dx.doi.org/10.1038/s41467-022-31514-x
author Park, Sangjoon
Kim, Gwanghyun
Oh, Yujin
Seo, Joon Beom
Lee, Sang Min
Kim, Jin Hwan
Moon, Sungjun
Lim, Jae-Kwang
Park, Chang Min
Ye, Jong Chul
collection PubMed
description Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This poses a conundrum: annually collected chest X-rays cannot be utilized due to the absence of labels, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL), inspired by the learning process of radiologists, which improves the performance of a vision transformer simultaneously through self-supervision and self-training via knowledge distillation. In external validation from three hospitals for the diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even surpassing a fully supervised model trained with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
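To illustrate the kind of training loop the abstract describes, the following is a minimal, hypothetical PyTorch sketch of a teacher-student scheme that combines a supervised loss on a small labeled set with self-training (hard pseudo-labels from the teacher) and a self-supervised consistency term, while the teacher evolves as an exponential moving average of the student. The stand-in model, loss weights, confidence threshold, and augmentation placeholders are illustrative assumptions, not the authors' DISTL implementation.

# Hypothetical teacher-student loop in the spirit of DISTL: supervised loss,
# self-training with teacher pseudo-labels, and self-supervised consistency.
# All hyperparameters and the toy model are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F


def ema_update(teacher, student, momentum=0.996):
    """Update teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def distl_step(student, teacher, optimizer,
               labeled_x, labels, unlabeled_views, tau=0.5):
    """One training step mixing supervised, self-training, and consistency losses."""
    # Supervised loss on the small labeled set.
    sup_loss = F.binary_cross_entropy_with_logits(student(labeled_x), labels)

    view_a, view_b = unlabeled_views
    with torch.no_grad():
        teacher_prob = torch.sigmoid(teacher(view_a))
        # Self-training: hard pseudo-labels from the teacher's predictions.
        pseudo = (teacher_prob > tau).float()
    # Student learns the pseudo-labels on a differently augmented view.
    self_train_loss = F.binary_cross_entropy_with_logits(student(view_b), pseudo)

    # Self-supervised consistency: student matches the teacher's soft output.
    self_sup_loss = F.mse_loss(torch.sigmoid(student(view_b)), teacher_prob)

    loss = sup_loss + self_train_loss + self_sup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # teacher distills knowledge back from the student
    return loss.item()


if __name__ == "__main__":
    # Tiny stand-in classifier; a vision transformer would be used in practice.
    student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

    labeled_x, labels = torch.randn(4, 3, 32, 32), torch.randint(0, 2, (4, 1)).float()
    views = (torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))  # two augmentations
    print(distl_step(student, teacher, opt, labeled_x, labels, views))
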
format Online
Article
Text
id pubmed-9252561
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9252561 2022-07-05
Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation
Park, Sangjoon; Kim, Gwanghyun; Oh, Yujin; Seo, Joon Beom; Lee, Sang Min; Kim, Jin Hwan; Moon, Sungjun; Lim, Jae-Kwang; Park, Chang Min; Ye, Jong Chul
Nat Commun Article
Nature Publishing Group UK 2022-07-04 /pmc/articles/PMC9252561/ /pubmed/35789159 http://dx.doi.org/10.1038/s41467-022-31514-x Text en
© The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
title Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9252561/
https://www.ncbi.nlm.nih.gov/pubmed/35789159
http://dx.doi.org/10.1038/s41467-022-31514-x