CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning
Automatic radiological report generation (ARRG) smooths the clinical workflow by speeding up the report generation task. Recently, various deep neural networks (DNNs) have been used for report generation and have achieved promising results. Despite the impressive results, their deployment remains challenging because of their size and complexity. Researchers have proposed several pruning methods to reduce the size of DNNs. Inspired by one-shot weight pruning methods, we present CheXPrune, a multi-attention based sparse radiology report generation method. It uses an encoder-decoder based architecture equipped with a visual and semantic attention mechanism. The model is 70% pruned during training to achieve 3.33× compression without sacrificing its accuracy. The empirical results evaluated on the OpenI dataset using BLEU, ROUGE, and CIDEr metrics confirm the accuracy of the sparse model vis-à-vis the dense model.
Main Authors: | Kaur, Navdeep; Mittal, Ajay |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Berlin Heidelberg, 2022 |
Subjects: | Original Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9628486/ https://www.ncbi.nlm.nih.gov/pubmed/36338854 http://dx.doi.org/10.1007/s12652-022-04454-z |
_version_ | 1784823203761225728 |
---|---|
author | Kaur, Navdeep Mittal, Ajay |
author_facet | Kaur, Navdeep Mittal, Ajay |
author_sort | Kaur, Navdeep |
collection | PubMed |
description | Automatic radiological report generation (ARRG) smooths the clinical workflow by speeding up the report generation task. Recently, various deep neural networks (DNNs) have been used for report generation and have achieved promising results. Despite the impressive results, their deployment remains challenging because of their size and complexity. Researchers have proposed several pruning methods to reduce the size of DNNs. Inspired by one-shot weight pruning methods, we present CheXPrune, a multi-attention based sparse radiology report generation method. It uses an encoder-decoder based architecture equipped with a visual and semantic attention mechanism. The model is 70% pruned during training to achieve 3.33× compression without sacrificing its accuracy. The empirical results evaluated on the OpenI dataset using BLEU, ROUGE, and CIDEr metrics confirm the accuracy of the sparse model vis-à-vis the dense model. |
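The abstract describes one-shot global weight pruning at 70% sparsity; the 3.33× compression figure follows directly from that level, since keeping only the surviving 30% of weights gives roughly 1 / (1 − 0.70) ≈ 3.33× fewer stored parameters. As an illustration only, and not the authors' CheXPrune code, the sketch below shows one-shot global magnitude pruning using PyTorch's built-in pruning utilities; the layers are placeholders standing in for the paper's encoder-decoder with visual and semantic attention.

```python
# Minimal sketch of one-shot global magnitude pruning at 70% sparsity.
# NOT the CheXPrune implementation; the model below is a placeholder.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder stand-in for an encoder-decoder report generator.
model = nn.Sequential(
    nn.Linear(2048, 512),   # e.g. visual-feature projection (assumed sizes)
    nn.ReLU(),
    nn.Linear(512, 512),    # e.g. attention / decoder layer (assumed)
    nn.ReLU(),
    nn.Linear(512, 1000),   # e.g. vocabulary projection (assumed)
)

# Collect every weight tensor to prune as (module, parameter_name) pairs.
parameters_to_prune = [
    (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
]

# One-shot *global* pruning: the 70% smallest-magnitude weights across all
# listed tensors are zeroed at once, rather than 70% within each layer.
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.70,
)

# Make the pruning permanent by baking the masks into the weights.
for module, name in parameters_to_prune:
    prune.remove(module, name)

# Sanity check: ~70% of weights are now zero, i.e. ~3.33x compression
# if the surviving weights are stored in a sparse format.
total = sum(m.weight.nelement() for m, _ in parameters_to_prune)
zeros = sum(int((m.weight == 0).sum()) for m, _ in parameters_to_prune)
print(f"global sparsity: {zeros / total:.2%}")
```

Because pruning is global, sparsity is distributed unevenly across layers according to weight magnitudes, which is generally what "one-shot global pruning" refers to; the 70% target here mirrors the sparsity level reported in the abstract.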
format | Online Article Text |
id | pubmed-9628486 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Berlin Heidelberg |
record_format | MEDLINE/PubMed |
spelling | pubmed-9628486 2022-11-02 CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning Kaur, Navdeep Mittal, Ajay J Ambient Intell Humaniz Comput Original Research Automatic radiological report generation (ARRG) smooths the clinical workflow by speeding up the report generation task. Recently, various deep neural networks (DNNs) have been used for report generation and have achieved promising results. Despite the impressive results, their deployment remains challenging because of their size and complexity. Researchers have proposed several pruning methods to reduce the size of DNNs. Inspired by one-shot weight pruning methods, we present CheXPrune, a multi-attention based sparse radiology report generation method. It uses an encoder-decoder based architecture equipped with a visual and semantic attention mechanism. The model is 70% pruned during training to achieve 3.33× compression without sacrificing its accuracy. The empirical results evaluated on the OpenI dataset using BLEU, ROUGE, and CIDEr metrics confirm the accuracy of the sparse model vis-à-vis the dense model. Springer Berlin Heidelberg 2022-11-01 2023 /pmc/articles/PMC9628486/ /pubmed/36338854 http://dx.doi.org/10.1007/s12652-022-04454-z Text en © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Original Research Kaur, Navdeep Mittal, Ajay CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning |
title | CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning |
title_full | CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning |
title_fullStr | CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning |
title_full_unstemmed | CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning |
title_short | CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning |
title_sort | chexprune: sparse chest x-ray report generation model using multi-attention and one-shot global pruning |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9628486/ https://www.ncbi.nlm.nih.gov/pubmed/36338854 http://dx.doi.org/10.1007/s12652-022-04454-z |
work_keys_str_mv | AT kaurnavdeep chexprunesparsechestxrayreportgenerationmodelusingmultiattentionandoneshotglobalpruning AT mittalajay chexprunesparsechestxrayreportgenerationmodelusingmultiattentionandoneshotglobalpruning |