
CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning


Bibliographic Details
Main Authors: Kaur, Navdeep; Mittal, Ajay
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9628486/
https://www.ncbi.nlm.nih.gov/pubmed/36338854
http://dx.doi.org/10.1007/s12652-022-04454-z
Description
Summary: Automatic radiological report generation (ARRG) streamlines the clinical workflow by speeding up the report generation task. Recently, various deep neural networks (DNNs) have been applied to report generation and have achieved promising results. Despite these impressive results, their deployment remains challenging because of their size and complexity. Researchers have proposed several pruning methods to reduce the size of DNNs. Inspired by one-shot weight pruning methods, we present CheXPrune, a multi-attention-based sparse radiology report generation method. It uses an encoder-decoder architecture equipped with visual and semantic attention mechanisms. The model is 70% pruned during training to achieve 3.33× compression without sacrificing accuracy. Empirical results on the OpenI dataset, evaluated using the BLEU, ROUGE, and CIDEr metrics, confirm the accuracy of the sparse model vis-à-vis the dense model.
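The abstract's numbers are self-consistent: pruning 70% of the weights leaves 30% nonzero, giving a compression ratio of 1/0.30 ≈ 3.33×. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration of one-shot *global* magnitude pruning with NumPy, where a single threshold is computed over all layers at once (the function name and layer shapes are assumptions for illustration only).

```python
import numpy as np

def one_shot_global_prune(weights, sparsity=0.70):
    """Zero out the smallest-magnitude weights across ALL layers at once.

    `weights` is a list of NumPy arrays (one per layer). The magnitude
    threshold is computed globally over every parameter, not per layer,
    which is what distinguishes global from layer-wise pruning.
    Hypothetical sketch; not the CheXPrune code.
    """
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, sparsity)  # global magnitude cutoff
    masks = [np.abs(w) > threshold for w in weights]
    pruned = [w * m for w, m in zip(weights, masks)]
    return pruned, masks

# Illustrative layer shapes (assumed, not from the paper).
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)), rng.standard_normal((64, 10))]
pruned, masks = one_shot_global_prune(layers, sparsity=0.70)

kept = sum(int(m.sum()) for m in masks)
total = sum(m.size for m in masks)
print(f"kept fraction: {kept / total:.2f}")  # ~0.30 of weights survive
print(f"compression:   {total / kept:.2f}x")  # ~1 / 0.30, i.e. about 3.33x
```

In practice, one-shot methods apply such a mask once (rather than iteratively re-pruning), and the masked model is then trained or fine-tuned with the zeroed weights held at zero.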