In Neural Networks: the official journal of the International Neural Network Society

The Residual Network (ResNet) is a powerful convolutional neural network architecture that achieves strong performance gains as networks grow deeper and wider. In this paper, we propose architectural refinements to ResNet that address the flow of information through several parts of the network, including the input stem, the downsampling block, the projection shortcut, and the identity blocks. We show that our collective refinements facilitate stable backpropagation by preserving the norm of the error gradient within the residual blocks, which reduces the optimization difficulties of training very deep networks. By enforcing norm preservation throughout training, our modifications improve the learning dynamics, resulting in gains in both accuracy and inference performance. The effectiveness of our method is verified by extensive experimental results on five computer vision tasks: image classification (ImageNet and CIFAR-100), video classification (Kinetics-400), multi-label image recognition (MS-COCO), and object detection and semantic segmentation (PASCAL VOC). We also empirically show consistent improvements in generalization when applying our modifications to different networks, providing new insights and inspiration for new architectures. The source code is publicly available at: https://github.com/bharatmahaur/LeNo.
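The abstract does not detail the specific refinements, so the following is only an illustrative sketch of the general idea rather than the authors' LeNo implementation: a PyTorch residual block (all names hypothetical) whose downsampling projection shortcut average-pools before a 1x1 convolution so the skip path discards less information, while the identity skip connection carries the error gradient around the convolutional branch largely unchanged.

```python
# Hypothetical illustration only; not the authors' LeNo code.
import torch
import torch.nn as nn


class DownsampleShortcut(nn.Module):
    """Projection shortcut: 2x2 average pool, then a 1x1 conv to match channels.

    Pooling before the 1x1 convolution avoids a strided 1x1 conv that would
    simply drop three quarters of the activations on the skip path.
    """

    def __init__(self, in_ch: int, out_ch: int, stride: int = 2):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=stride, stride=stride)
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return self.bn(self.proj(self.pool(x)))


class ResidualBlock(nn.Module):
    """Basic residual block: the identity (or projected) shortcut adds the
    input back to the branch output, so the backward pass routes part of the
    gradient around the convolutions unchanged."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Use a projection shortcut only when the spatial size or width changes.
        if stride != 1 or in_ch != out_ch:
            self.shortcut = DownsampleShortcut(in_ch, out_ch, stride)
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    block = ResidualBlock(64, 128, stride=2)
    print(block(x).shape)  # torch.Size([1, 128, 28, 28])
```

Because the block output is the branch output plus the shortcut, the gradient at the block input is the branch gradient plus the (near-identity) shortcut gradient, which is what keeps the gradient norm from collapsing as depth grows; the paper's refinements aim to enforce this norm preservation more strictly across the input stem, downsampling block, projection shortcut, and identity blocks.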

Bharat Mahaur, K. K. Mishra, Navjot Singh

2022-Oct-28

Convolutional neural networks, Deep learning, Gradient flow, Norm preservation, Optimization stability, Residual Networks