In IEEE Journal of Biomedical and Health Informatics

With the advance of medical imaging technologies, multimodal images such as magnetic resonance imaging (MRI) and positron emission tomography (PET) can capture subtle structural and functional changes of the brain, facilitating the diagnosis of brain diseases such as Alzheimer's disease (AD). In practice, multimodal images may be incomplete, since PET is often missing due to its high financial cost or limited availability. Most existing methods simply exclude subjects with missing data, which unfortunately reduces the sample size. In addition, how to extract and combine multimodal features remains challenging. To address these problems, we propose a deep learning framework that integrates a task-induced pyramid and attention generative adversarial network (TPA-GAN) with a pathwise transfer dense convolution network (PT-DCN) for the imputation and classification of multimodal brain images. First, we propose a TPA-GAN that integrates pyramid convolution, an attention module, and a disease classification task into a GAN for generating missing PET data from the subjects' MRI. Then, with the imputed multimodal brain images, we build a dense convolution network with pathwise transfer blocks to gradually learn and combine multimodal features for the final disease classification. Experiments are performed on the ADNI-1 and ADNI-2 datasets to evaluate the proposed method, which achieves superior performance in image imputation and brain disease diagnosis compared to state-of-the-art methods.
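The abstract names two architectural ingredients: a pyramid-and-attention generator for imputing PET from MRI, and pathwise transfer blocks that fuse the two modalities' feature paths. Below is a minimal PyTorch sketch of those ingredients; the module names (PyramidConv3d, ChannelAttention, PathwiseTransferBlock), channel counts, kernel sizes, and the fusion rule are all illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch of the building blocks described in the abstract.
# Layer sizes and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class PyramidConv3d(nn.Module):
    """Parallel 3D convolutions at several kernel sizes (a 'pyramid'),
    concatenated to capture brain structure at multiple scales."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch // len(kernel_sizes), k, padding=k // 2)
            for k in kernel_sizes
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention: reweight feature channels
    with a learned gate derived from global average pooling."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv3d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class PathwiseTransferBlock(nn.Module):
    """One fusion step: each modality keeps its own path, and a transfer
    connection injects the other modality's features before a residual add."""
    def __init__(self, ch):
        super().__init__()
        self.mri_conv = nn.Conv3d(2 * ch, ch, 3, padding=1)
        self.pet_conv = nn.Conv3d(2 * ch, ch, 3, padding=1)

    def forward(self, mri_feat, pet_feat):
        fused = torch.cat([mri_feat, pet_feat], dim=1)
        return (self.mri_conv(fused) + mri_feat,
                self.pet_conv(fused) + pet_feat)


if __name__ == "__main__":
    # Toy volumes: (batch, channels, depth, height, width).
    mri = torch.randn(2, 8, 16, 16, 16)
    pet = torch.randn(2, 8, 16, 16, 16)
    m, p = PathwiseTransferBlock(8)(mri, pet)
    print(m.shape, p.shape)  # both torch.Size([2, 8, 16, 16, 16])
```

In the paper's pipeline, blocks like these would be stacked: the pyramid and attention modules inside the GAN generator that imputes PET, and the transfer blocks inside the dense classification network that gradually mixes the MRI and (imputed) PET feature streams.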

Gao Xingyu, Shi Feng, Shen Dinggang, Liu Manhua

2021-Jul-19