
In Journal of the Optical Society of America A: Optics, Image Science, and Vision

At present, deep-learning-based infrared and visible image fusion methods extract insufficient features from the source images, which leads to an imbalance of infrared and visible information in the fused images. To address this problem, a multiscale feature pyramid network based on activity level weight selection (MFPN-AWS) with a complete downsampling-upsampling structure is proposed. The network consists of three parts: a downsampling convolutional network, an AWS fusion layer, and an upsampling convolutional network. First, multiscale deep features are extracted by the downsampling convolutional network, yielding rich intermediate-layer information. Second, the AWS fusion layer combines the strengths of an l1-norm and global pooling dual fusion strategy to describe target saliency and texture detail, effectively balancing the multiscale infrared and visible features. Finally, the multiscale fused features are reconstructed by the upsampling convolutional network to obtain the fused image. Compared with nine state-of-the-art methods on the publicly available TNO and VIFB datasets, MFPN-AWS produces more natural and balanced fusion results, with better overall clarity and more salient targets, and achieves the best values on two metrics: mutual information and visual fidelity.
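To make the fusion step more concrete, the sketch below illustrates one way an activity-level weight selection layer could combine an l1-norm spatial activity map with global-average-pooling channel weights at a single pyramid scale. This is a minimal PyTorch sketch written from the abstract alone: the function name `aws_fuse`, the softmax weighting, and the 0.5/0.5 blend of the two strategies are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an activity-level weight selection (AWS) fusion step.
# Assumptions (not from the paper): the exact weighting formula, the softmax
# blending, and the equal mix of the two strategies are illustrative choices.
import torch


def aws_fuse(feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
    """Fuse infrared and visible feature maps of shape (B, C, H, W).

    Combines two activity measures:
      1. an l1-norm spatial activity map (per-pixel sum of absolute channel
         activations), intended to emphasize salient targets, and
      2. a global-average-pooling channel weight, intended to reflect the
         texture/detail statistics of the whole feature map.
    """
    # --- spatial weights from the l1-norm activity map ---
    act_ir = feat_ir.abs().sum(dim=1, keepdim=True)     # (B, 1, H, W)
    act_vis = feat_vis.abs().sum(dim=1, keepdim=True)
    spatial_w = torch.softmax(torch.cat([act_ir, act_vis], dim=1), dim=1)  # (B, 2, H, W)

    # --- channel weights from global average pooling ---
    gap_ir = feat_ir.mean(dim=(2, 3), keepdim=True)      # (B, C, 1, 1)
    gap_vis = feat_vis.mean(dim=(2, 3), keepdim=True)
    chan_w = torch.softmax(torch.stack([gap_ir, gap_vis], dim=0), dim=0)   # (2, B, C, 1, 1)

    # Blend the two strategies with equal weight (an illustrative choice);
    # the per-source weights still sum to one, so feature scale is preserved.
    w_ir = 0.5 * spatial_w[:, 0:1] + 0.5 * chan_w[0]
    w_vis = 0.5 * spatial_w[:, 1:2] + 0.5 * chan_w[1]
    return w_ir * feat_ir + w_vis * feat_vis


# Toy usage: fuse one scale of a feature pyramid.
ir = torch.randn(1, 64, 32, 32)
vis = torch.randn(1, 64, 32, 32)
fused = aws_fuse(ir, vis)
print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

In the full network described by the abstract, a step like this would be applied independently at each scale of the downsampling pyramid, and the fused multiscale features would then be passed to the upsampling convolutional network for reconstruction.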

Xu Rui, Liu Gang, Xie Yuning, Prasad Bavirisetti Durga, Qian Yao, Xing Mengliang

2022-Dec-01