
In Computational Intelligence and Neuroscience

The convolutional neural network is an important model in deep learning. Keeping the singular values of each layer's Jacobian bounded around 1 during training helps avoid the exploding/vanishing gradient problem and improves the generalizability of the network. We propose a new Frobenius norm penalty function for a convolutional kernel tensor that keeps the singular values of the corresponding transformation matrix bounded around 1, and we show how to carry out gradient-type methods for this penalty. This provides a potentially useful regularization method for the weights of convolutional layers.
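To make the idea concrete, here is a minimal sketch of a Frobenius norm penalty that pushes the singular values of a convolutional kernel toward 1. It is an illustration of the general approach, not the paper's exact method: the paper penalizes the transformation matrix of the convolution, while this sketch uses the common surrogate ||W W^T - I||_F^2 on the flattened kernel; the function name `soft_orthogonality_penalty` and the penalty weight `lam` are assumptions for illustration.

```python
# Sketch: Frobenius norm penalty encouraging singular values near 1.
# This penalizes the flattened kernel rather than the full convolution
# transformation matrix used in the paper, so treat it as a surrogate.
import torch
import torch.nn as nn

def soft_orthogonality_penalty(kernel: torch.Tensor) -> torch.Tensor:
    """Return ||W W^T - I||_F^2 for the flattened kernel W.

    kernel: conv weight of shape (out_channels, in_channels, kh, kw).
    The penalty is zero exactly when all singular values of the
    flattened kernel equal 1, and grows as they drift away from 1.
    """
    out_channels = kernel.shape[0]
    w = kernel.reshape(out_channels, -1)            # (out, in*kh*kw)
    gram = w @ w.t()                                # (out, out)
    eye = torch.eye(out_channels, device=kernel.device, dtype=kernel.dtype)
    return torch.linalg.norm(gram - eye, ord="fro") ** 2

# Usage: add the penalty to the task loss; autograd supplies the
# gradient, so any gradient-type method (SGD, Adam, ...) applies.
conv = nn.Conv2d(16, 32, kernel_size=3)
x = torch.randn(8, 16, 28, 28)
task_loss = conv(x).pow(2).mean()                   # stand-in for a real loss
lam = 1e-3                                          # penalty weight (assumed)
loss = task_loss + lam * soft_orthogonality_penalty(conv.weight)
loss.backward()
```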

Guo Pei-Chang

2022