
In Heliyon

Covid-19 has posed a serious threat to human lives worldwide. Early detection of the virus is vital to containing its spread and treating patients effectively. Established testing methods such as the real-time reverse transcription-polymerase chain reaction (RT-PCR) test and the Rapid Antigen Test (RAT) are used for detection, but they have their limitations. The need for early detection has led researchers to explore other testing techniques. Deep Neural Network (DNN) models have shown high potential in medical image classification, and researchers have built various models that achieve high accuracy for Covid-19 detection from chest X-ray images. However, DNNs are known to be inherently susceptible to adversarial inputs, which can compromise the models' results. In this paper, the adversarial robustness of such Covid-19 classifiers is evaluated using common adversarial attacks, namely the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Under these attacks, the accuracy of the models on Covid-19 samples drops drastically. In the medical domain, adversarial training is the most widely explored defense against adversarial attacks; however, it requires replacing the original model and retraining it with adversarial samples included. Another defensive technique, the High-Level Representation Guided Denoiser (HGD), overcomes this limitation by acting as an adversarial filter that is also transferable across models. Moreover, because the HGD architecture is suited to high-resolution images, it is a good candidate for medical imaging applications. In this paper, the HGD architecture is evaluated as a potential defensive technique for medical image analysis. Experiments show an increased accuracy of up to 82% in the white-box setting. In the black-box setting, however, the defense fails completely against adversarial samples.
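As context for the attacks and the HGD defense named in the abstract, the sketch below illustrates standard FGSM and PGD perturbations against a generic PyTorch image classifier, plus an HGD-style training loss that matches high-level features of the denoised adversarial image to those of the clean image. The `model`, `denoiser`, `feature_extractor`, and the epsilon/step-size values are hypothetical placeholders, not the paper's actual configuration; this is a minimal illustration of the named techniques, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Single-step FGSM: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Iterative PGD: repeated gradient-sign steps, projected back into the
    L-infinity ball of radius epsilon around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def hgd_style_loss(denoiser, feature_extractor, x_adv, x_clean):
    """HGD-style objective: the denoiser is trained so that high-level
    features of the denoised adversarial image match those of the clean
    image, rather than matching pixels directly."""
    return F.l1_loss(feature_extractor(denoiser(x_adv)),
                     feature_extractor(x_clean))
```

Because the HGD loss is defined on a guide network's high-level representations rather than on pixels, the trained denoiser can be placed in front of a classifier without retraining it, which is the transferability property the abstract highlights.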

Keshav Kansal, P Sai Krishna, Parshva B Jain, Surya R, Prasad Honnavalli, Sivaraman Eswaran

October 2022

Adversarial attacks, Deep neural network, Denoiser, FGSM, HGD, Machine learning, PGD