In Journal of Ambient Intelligence and Humanized Computing
Automatic radiological report generation (ARRG) streamlines the clinical workflow by speeding up the report-writing task. Recently, various deep neural networks (DNNs) have been used for report generation and have achieved promising results. Despite these impressive results, their deployment remains challenging because of their size and complexity. Researchers have proposed several pruning methods to reduce the size of DNNs. Inspired by one-shot weight pruning methods, we present CheXPrune, a multi-attention based sparse radiology report generation method. It uses an encoder-decoder architecture equipped with a visual and semantic attention mechanism. The model is 70% pruned during training to achieve 3.33× compression without sacrificing accuracy. The empirical results, evaluated on the OpenI dataset using BLEU, ROUGE, and CIDEr metrics, confirm the accuracy of the sparse model vis-à-vis the dense model.
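The abstract's numbers are consistent: removing 70% of the weights leaves 30%, giving a compression ratio of 1/(1-0.7) ≈ 3.33×. A minimal sketch of one-shot magnitude pruning (illustrative only, not the authors' implementation; the function name and NumPy-based setup are assumptions):

```python
import numpy as np

def one_shot_magnitude_prune(weights, sparsity=0.7):
    """Zero out the smallest-magnitude entries so that `sparsity`
    fraction of the weights are zero (hypothetical helper, not
    the paper's code)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of entries to prune
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # True = keep this weight
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))  # stand-in for a trained layer
pruned, mask = one_shot_magnitude_prune(w, sparsity=0.7)

achieved = 1.0 - mask.mean()
print(f"sparsity: {achieved:.2f}")               # ≈ 0.70
print(f"compression: {1 / (1 - achieved):.2f}x")  # ≈ 3.33x
```

In practice such a mask is applied per layer during training (as CheXPrune prunes during training) rather than once at the end, but the threshold-and-mask step is the core of any one-shot magnitude criterion.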
Kaur Navdeep, Mittal Ajay
2022-Nov-01
Chest radiographs, Deep-learning, Multi-attention, Pruning, Radiological report generation, Radiological reports, Sparse DNN, Textual description