In Proceedings of SPIE--the International Society for Optical Engineering
The rapid development of deep-learning methods in medical imaging has called for an analysis method suitable for non-linear and data-dependent algorithms. In this work, we investigate a local linearity analysis in which a complex neural network is represented as a collection of piecewise linear systems. We recognize that a large number of neural networks consist of alternating linear layers and rectified linear unit (ReLU) activations, and are therefore strictly piecewise linear. We investigated the extent of these locally linear regions by gradually adding perturbations to an operating point. For this work, we explored perturbations based on image features of interest, including lesion contrast, background, and additive noise. We then developed strategies to extend these strictly locally linear regions to include neighboring linear regions with similar gradients. Using these approximately linear regions, we applied singular value decomposition (SVD) analysis to each local linear system to investigate and explain the overall non-linear and data-dependent behaviors of neural networks. The analysis was applied to an example CT denoising algorithm trained on thorax CT scans. We observed that the strictly local linear regions are highly sensitive to small signal perturbations: over a lesion contrast range of 0.007 to 0.04 mm⁻¹, there are a total of 33,992 linear regions. The Jacobians are also shift-variant. However, the Jacobians of neighboring linear regions are very similar. By combining linear regions with similar Jacobians, we narrowed the number of approximately linear regions down to four over a lesion contrast range of 0.001 to 0.08 mm⁻¹. The SVD analysis of the different linear regions revealed denoising behavior that is highly dependent on the background intensity. The analysis further identified a greater amount of noise reduction in uniform regions than at lesion edges. In summary, the proposed local linearity analysis framework has the potential to better characterize and interpret the non-linear and data-dependent behaviors of neural networks.
Junyuan Li, Wenying Wang, Matthew Tivnan, Jeremias Sulam, Jerry L. Prince, Michael McNitt-Gray, J. Webster Stayman, Grace J. Gang
June 2022
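The workflow described in the abstract (probe the strictly linear region around an operating point via the ReLU activation pattern, compare Jacobians of neighboring regions, and analyze each local linear system with SVD) can be illustrated with a minimal PyTorch sketch. The network, the lesion/background construction, the contrast range, and the merging tolerance below are all hypothetical stand-ins, not the trained thorax-CT denoiser or the settings used in the paper.

```python
# Minimal sketch (hypothetical network and parameters): probe strictly linear
# regions of a ReLU network, merge neighbors with similar Jacobians, and
# inspect a local linear system via SVD.
import torch

torch.manual_seed(0)

# Stand-in for a trained CT denoiser: a tiny ReLU MLP on flattened patches.
net = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
).eval()

def relu_pattern(x):
    """Sign pattern of every ReLU pre-activation; it indexes the strictly
    linear region containing the operating point x."""
    signs, h = [], x
    for layer in net:
        h = layer(h)
        if isinstance(layer, torch.nn.ReLU):
            signs.append(h > 0)
    return torch.cat([s.flatten() for s in signs])

def jacobian_at(x):
    """Jacobian of the network at x, i.e. the local linear system."""
    return torch.autograd.functional.jacobian(net, x)

# Operating point: a background patch plus a lesion of variable contrast.
background = torch.rand(64)
lesion = torch.zeros(64)
lesion[24:40] = 1.0

patterns, jacobians = [], []
for contrast in torch.linspace(0.0, 0.1, 21):   # perturb lesion contrast
    x = background + contrast * lesion
    patterns.append(relu_pattern(x))
    jacobians.append(jacobian_at(x))

# Strictly linear regions crossed as contrast increases (at this sampling).
strict_regions = 1 + sum(
    int(not torch.equal(patterns[i], patterns[i - 1]))
    for i in range(1, len(patterns))
)

# Merge neighboring regions whose Jacobians are nearly identical
# (relative Frobenius distance below a tolerance).
tol, approx_regions = 1e-2, 1
for i in range(1, len(jacobians)):
    rel = torch.linalg.norm(jacobians[i] - jacobians[i - 1]) \
        / torch.linalg.norm(jacobians[i - 1])
    approx_regions += int(rel > tol)

# SVD of the local linear system: singular values and vectors describe which
# image features the denoiser preserves or suppresses near this operating point.
U, S, Vh = torch.linalg.svd(jacobians[-1])
print(f"strict regions: {strict_regions}, approximate regions: {approx_regions}")
print("leading singular values:", S[:5])
```

In this sketch the activation pattern serves as the index of a strictly linear region, and consecutive operating points with nearly identical Jacobians are counted as one approximately linear region; the actual perturbation directions (lesion contrast, background, additive noise) and merging criterion follow the procedure summarized in the abstract.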