In Computational Intelligence and Neuroscience

At present, deep-learning-based image inpainting methods achieve better results than traditional methods, but when images with large defect areas are processed, the inpainting results still suffer from problems such as disordered local structure and blurred texture. This paper proposes a two-stage generative image inpainting model constrained by edges and feature self-arrangement. The model consists of two parts: an edge repair network and an image inpainting network. Built on an autoencoder, the edge repair network generates the edges in the defect area from the known information of the image and improves the edge repair effect by minimizing an adversarial loss and a feature matching loss. The image inpainting network then fills the defect area using the edge repair result as a prior. On the basis of U-Net, a feature self-arrangement module (FSM) is proposed to reconstruct the encoder features at a specific scale; the reconstructed features are skip-connected to the decoder layer of the same scale and fused with the features from the layer above for decoding. Meanwhile, a guide loss, an adversarial loss, and a reconstruction loss are introduced to narrow the difference between the repaired image and the original image. Experimental results show that the inpainting results of the proposed model have stronger structural connectivity and clearer textures, and its PSNR, SSIM, and mean L1 loss on the CelebA, Facade, and Places2 datasets are better than those of other inpainting methods, indicating that the algorithm can produce inpainting results with well-connected structure, reasonable semantics, and fine details.
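The abstract does not give the FSM's internals, but feature rearrangement of this kind is commonly done by replacing each defect-region feature vector with its most similar known-region feature vector. The sketch below is a minimal, hypothetical NumPy illustration of that idea (the function name, cosine-similarity matching, and single-vector matching granularity are assumptions, not the paper's actual module):

```python
import numpy as np

def feature_self_arrange(feat, mask):
    """Hypothetical sketch of a feature self-arrangement step.

    Each feature vector inside the defect region (mask == 1) is replaced
    by the most cosine-similar feature vector from the known region,
    so that hole features are reconstructed from valid context.

    feat: (C, H, W) encoder feature map
    mask: (H, W) binary defect mask
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, -1)               # (C, H*W)
    m = mask.reshape(-1).astype(bool)
    known = flat[:, ~m]                      # features from the known region

    def unit(x):
        # normalize columns to unit length for cosine similarity
        return x / (np.linalg.norm(x, axis=0, keepdims=True) + 1e-8)

    # similarity of every defect feature to every known feature
    sim = unit(flat[:, m]).T @ unit(known)   # (n_defect, n_known)
    best = sim.argmax(axis=1)

    out = flat.copy()
    out[:, m] = known[:, best]               # copy in best-matching features
    return out.reshape(C, H, W)
```

In the paper's architecture these rearranged features would then be passed through the skip connection and fused with the decoder features of the same scale; here the function simply returns the reconstructed map.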

Yao Fan, Chu Yanli