
In Neural Networks: The Official Journal of the International Neural Network Society

We consider synchronous data-parallel neural network training with a fixed large batch size. While a large batch size provides a high degree of parallelism, it degrades generalization performance due to the low gradient noise scale. We propose a general learning rate adjustment framework and three critical heuristics that tackle this poor generalization. The key idea is to adjust the learning rate based on geometric information about the loss landscape, encouraging the model to converge to a flat minimum, which is known to generalize better to unseen data. Our empirical study demonstrates that the Hessian-aware learning rate schedule remarkably improves generalization performance in large-batch training. For CIFAR-10 classification with ResNet20, our method achieves 92.31% accuracy with a batch size of 16,384, close to the 92.83% achieved with a batch size of 128, at negligible extra computational cost.
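The abstract does not spell out the adjustment rule, but the core idea of curvature-aware learning rate scaling can be sketched as follows: estimate the sharpest direction of the loss landscape via the top Hessian eigenvalue (power iteration over Hessian-vector products) and shrink the step size when the landscape is sharp. This is a minimal illustration only; the function names (`top_hessian_eigenvalue`, `sharpness_scaled_lr`) and the specific scaling rule are assumptions for exposition, not the authors' exact schedule or heuristics.

```python
import torch


def top_hessian_eigenvalue(loss, params, n_iters=10):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    by power iteration on Hessian-vector products (Pearlmutter trick).
    Illustrative sketch, not the paper's released implementation."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(n_iters):
        # Normalize the probe vector across all parameter tensors.
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / (norm + 1e-12) for u in v]
        # Hessian-vector product: gradient of (grads . v) w.r.t. params.
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        # Rayleigh quotient with the normalized probe vector.
        eig = sum((h * u).sum() for h, u in zip(hv, v)).item()
        v = [h.detach() for h in hv]
    return eig


def sharpness_scaled_lr(base_lr, eig, ref_eig):
    """Shrink the learning rate when local curvature exceeds a reference
    sharpness (hypothetical rule standing in for the paper's heuristics)."""
    return base_lr * min(1.0, ref_eig / max(eig, 1e-12))


# Hypothetical use inside a training loop:
#   loss = criterion(model(x), y)
#   lam = top_hessian_eigenvalue(loss, list(model.parameters()))
#   for group in optimizer.param_groups:
#       group["lr"] = sharpness_scaled_lr(base_lr, lam, ref_eig)
```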

Sunwoo Lee, Chaoyang He, Salman Avestimehr

2022-Nov-11

Deep learning, Hessian information, Large-batch training, Learning rate adjustment