Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation
Intelligent diagnosis has been widely studied for novel coronavirus disease (COVID-19). Existing deep models typically do not make full use of global features, such as large areas of ground-glass opacities, or local features, such as local bronchiolectasis, in COVID-19 chest CT images, leading to unsatisfactory recognition accuracy…
Main Authors: | Dong, Aimei, Liu, Jian, Zhang, Guodong, Wei, Zhonghe, Zhai, Yi, Lv, Guohua |
Format: | Online Article Text |
Language: | English |
Published: | Elsevier Ltd., 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232920/ https://www.ncbi.nlm.nih.gov/pubmed/37303605 http://dx.doi.org/10.1016/j.patcog.2023.109732 |
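The abstract describes two core components: a momentum contrastive pre-training task for the Vision Transformer, and a knowledge distillation step that injects convolutional locality during fine-tuning. A minimal numeric sketch of the two underlying mechanisms is below; the function names, the EMA momentum update, and the temperature-softened KL distillation loss are illustrative assumptions in the style of MoCo-type contrastive learning and soft-target distillation, not the authors' actual MCT-KD implementation.

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """MoCo-style momentum (EMA) update: the key encoder's parameters
    track the query encoder's parameters with coefficient m."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=3.0):
    """Soft-target distillation loss: KL divergence between the
    temperature-softened teacher and student distributions, averaged
    over the batch and scaled by T^2 as is conventional."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    return float((T * T) * kl / student_logits.shape[0])
```

In this sketch, the query encoder would play the role of the Vision Transformer being trained, the momentum-updated key encoder provides stable contrastive targets, and the distillation loss would be applied during fine-tuning with a convolutional teacher supplying `teacher_logits`.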
_version_ | 1785052109093208064 |
author | Dong, Aimei Liu, Jian Zhang, Guodong Wei, Zhonghe Zhai, Yi Lv, Guohua |
author_facet | Dong, Aimei Liu, Jian Zhang, Guodong Wei, Zhonghe Zhai, Yi Lv, Guohua |
author_sort | Dong, Aimei |
collection | PubMed |
description | Intelligent diagnosis has been widely studied for novel coronavirus disease (COVID-19). Existing deep models typically do not make full use of global features, such as large areas of ground-glass opacities, or local features, such as local bronchiolectasis, in COVID-19 chest CT images, leading to unsatisfactory recognition accuracy. To address this challenge, this paper proposes a novel method to diagnose COVID-19 using momentum contrast and knowledge distillation, termed MCT-KD. Our method leverages a Vision Transformer in a momentum contrastive learning task to effectively extract global features from COVID-19 chest CT images. Moreover, during the transfer and fine-tuning process, we integrate the locality of convolution into the Vision Transformer via a specially designed knowledge distillation. These strategies enable the final Vision Transformer to focus simultaneously on global and local features in COVID-19 chest CT images. In addition, momentum contrastive learning is self-supervised, which mitigates the difficulty of training Vision Transformers on small datasets. Extensive experiments confirm the effectiveness of the proposed MCT-KD. In particular, MCT-KD achieves 87.43% and 96.94% accuracy on two publicly available datasets, respectively. |
format | Online Article Text |
id | pubmed-10232920 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Elsevier Ltd. |
record_format | MEDLINE/PubMed |
spelling | pubmed-102329202023-06-01 Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation Dong, Aimei Liu, Jian Zhang, Guodong Wei, Zhonghe Zhai, Yi Lv, Guohua Pattern Recognit Article Intelligent diagnosis has been widely studied for novel coronavirus disease (COVID-19). Existing deep models typically do not make full use of global features, such as large areas of ground-glass opacities, or local features, such as local bronchiolectasis, in COVID-19 chest CT images, leading to unsatisfactory recognition accuracy. To address this challenge, this paper proposes a novel method to diagnose COVID-19 using momentum contrast and knowledge distillation, termed MCT-KD. Our method leverages a Vision Transformer in a momentum contrastive learning task to effectively extract global features from COVID-19 chest CT images. Moreover, during the transfer and fine-tuning process, we integrate the locality of convolution into the Vision Transformer via a specially designed knowledge distillation. These strategies enable the final Vision Transformer to focus simultaneously on global and local features in COVID-19 chest CT images. In addition, momentum contrastive learning is self-supervised, which mitigates the difficulty of training Vision Transformers on small datasets. Extensive experiments confirm the effectiveness of the proposed MCT-KD. In particular, MCT-KD achieves 87.43% and 96.94% accuracy on two publicly available datasets, respectively. Elsevier Ltd. 2023-11 2023-06-01 /pmc/articles/PMC10232920/ /pubmed/37303605 http://dx.doi.org/10.1016/j.patcog.2023.109732 Text en © 2023 Elsevier Ltd. All rights reserved. Since January 2020 Elsevier has created a COVID-19 resource centre with free information in English and Mandarin on the novel coronavirus COVID-19. The COVID-19 resource centre is hosted on Elsevier Connect, the company's public news and information website.
Elsevier hereby grants permission to make all its COVID-19-related research that is available on the COVID-19 resource centre - including this research content - immediately available in PubMed Central and other publicly funded repositories, such as the WHO COVID database with rights for unrestricted research re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for free by Elsevier for as long as the COVID-19 resource centre remains active. |
spellingShingle | Article Dong, Aimei Liu, Jian Zhang, Guodong Wei, Zhonghe Zhai, Yi Lv, Guohua Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation |
title | Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation |
title_full | Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation |
title_fullStr | Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation |
title_full_unstemmed | Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation |
title_short | Momentum contrast transformer for COVID-19 diagnosis with knowledge distillation |
title_sort | momentum contrast transformer for covid-19 diagnosis with knowledge distillation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232920/ https://www.ncbi.nlm.nih.gov/pubmed/37303605 http://dx.doi.org/10.1016/j.patcog.2023.109732 |
work_keys_str_mv | AT dongaimei momentumcontrasttransformerforcovid19diagnosiswithknowledgedistillation AT liujian momentumcontrasttransformerforcovid19diagnosiswithknowledgedistillation AT zhangguodong momentumcontrasttransformerforcovid19diagnosiswithknowledgedistillation AT weizhonghe momentumcontrasttransformerforcovid19diagnosiswithknowledgedistillation AT zhaiyi momentumcontrasttransformerforcovid19diagnosiswithknowledgedistillation AT lvguohua momentumcontrasttransformerforcovid19diagnosiswithknowledgedistillation |