In Optics Letters
The diffractive deep neural network (D2NN) has demonstrated its importance in performing various all-optical machine learning tasks, such as classification and segmentation. However, deeper D2NNs, which provide higher inference complexity, are more difficult to train due to the vanishing-gradient problem. We introduce residual D2NNs (Res-D2NN), which enable substantially deeper diffractive networks to be trained by constructing diffractive residual learning blocks that learn residual mapping functions. Unlike existing plain D2NNs, Res-D2NNs incorporate a learnable light shortcut that directly connects the input and output of optical layers. This shortcut offers a direct path for gradient backpropagation during training, effectively alleviating the vanishing-gradient problem in very deep diffractive neural networks. Experimental results on image classification and pixel super-resolution demonstrate the superiority of Res-D2NNs over existing plain D2NN architectures.
Dou Hongkun, Deng Yue, Yan Tao, Wu Huaqiang, Lin Xing, Dai Qionghai
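To illustrate the idea of a light shortcut around a stack of diffractive layers, the sketch below simulates a residual diffractive block in NumPy: each layer applies a trainable phase mask followed by free-space propagation (angular spectrum method), and the block output is the propagated field plus the unmodified input field. This is a simplified numerical illustration under assumed parameters (wavelength, pixel pitch, layer spacing, and a plain additive shortcut); it is not the authors' implementation, and the exact combining rule of the paper's learnable shortcut may differ.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation of a complex field via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Propagating components only; evanescent components (arg <= 0) are dropped.
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def residual_diffractive_block(field, phases, wavelength=750e-9, dx=400e-9, z=40e-6):
    """Residual block: phase-modulation layers plus a direct light shortcut.

    `phases` is a list of phase masks (the trainable parameters).
    The additive shortcut is an illustrative assumption: the input field
    bypasses the layer stack and is recombined at the block output.
    """
    shortcut = field
    out = field
    for phi in phases:
        out = angular_spectrum_propagate(out * np.exp(1j * phi), wavelength, dx, z)
    return out + shortcut  # light shortcut: direct input-to-output connection
```

Because the shortcut is a simple addition of complex fields, the gradient of a loss with respect to the block input includes an identity term, which is the mechanism by which such shortcuts keep gradients from vanishing in deep stacks.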