In Computers in Biology and Medicine
Deep learning techniques are proving instrumental in identifying, classifying, and quantifying patterns in medical images. Segmentation is one of the important applications in medical image analysis, and the U-Net has become the predominant deep-learning approach to medical image segmentation tasks. Existing U-Net-based models, however, have several limitations: the millions of parameters in the U-Net, which consume considerable computational resources and memory; the lack of global information; and incomplete segmentation in difficult cases. To remove some of those limitations, we built on our previous work and applied two modifications to improve the U-Net model: 1) we designed and added a dilated channel-wise CNN module, and 2) we simplified the U-shaped network. We then proposed a novel lightweight architecture, the Channel-wise Feature Pyramid Network for Medicine (CFPNet-M). To evaluate our method, we selected five datasets from different imaging modalities: thermography, electron microscopy, endoscopy, dermoscopy, and digital retinal images. We compared its performance with that of several models of varying complexity. We used the Tanimoto similarity instead of the Jaccard index for gray-level image comparisons (a sketch of this measure follows the record below). CFPNet-M achieves segmentation results on all five medical datasets that are comparable to those of existing methods, yet it requires only 8.8 MB of memory and just 0.65 million parameters, about 2% of those in U-Net. Unlike other deep-learning segmentation methods, this new approach is suitable for real-time application: its inference speed reaches 80 frames per second on a single RTX 2070Ti GPU with an input image size of 256 × 192 pixels.
Lou Ange, Guan Shuyue, Loew Murray
2023-Jan-24
CFPNet-M, Light-weight network, Medical image, Real-time segmentation, Tanimoto similarity
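A minimal sketch of the Tanimoto similarity mentioned in the abstract, assuming the standard real-valued generalization of the Jaccard index: the inner product of the two images divided by the sum of their squared norms minus that inner product. For binary masks this reduces exactly to the Jaccard index; the function name, epsilon guard, and flattening below are illustrative choices and are not taken from the paper's implementation.

```python
import numpy as np

def tanimoto_similarity(pred, target, eps=1e-7):
    """Tanimoto similarity between two gray-level images.

    For real-valued inputs, T = <a, b> / (||a||^2 + ||b||^2 - <a, b>).
    When both inputs are binary, this equals the Jaccard index.
    """
    a = np.asarray(pred, dtype=np.float64).ravel()
    b = np.asarray(target, dtype=np.float64).ravel()
    inner = np.dot(a, b)
    # eps avoids division by zero when both images are entirely zero.
    return inner / (np.dot(a, a) + np.dot(b, b) - inner + eps)

# Binary example: intersection = 2, union = 4, so Tanimoto = Jaccard = 0.5.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(tanimoto_similarity(a, b))  # ~0.5
```

The advantage suggested by the abstract is that this form accepts gray-level (soft) predictions directly, so no thresholding is needed before comparing a network's output with a ground-truth mask.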