Medical Image Analysis
This paper proposes a deep learning framework that encodes subject-specific transformations between facial and bony shapes for orthognathic surgical planning. The framework centers on a bidirectional point-to-point convolutional network (P2P-Conv) that predicts the transformations between facial and bony shapes. P2P-Conv extends the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. During training, data augmentation is performed by sampling multiple point subsets from the facial and bony shapes. During inference, the network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration with the coherent point drift (CPD) algorithm is applied to generate surface meshes from the predicted point sets. Experimental results on real-subject data demonstrate that the proposed method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.
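The subset-based inference step described above can be sketched in a minimal form: random point subsets are passed through the network, and the per-point displacements are averaged to form a dense transformation. This is an illustrative assumption about the aggregation scheme, not the authors' implementation; `predict_subset` is a hypothetical stand-in for the trained P2P-Conv network.

```python
import numpy as np

def predict_subset(points):
    # Hypothetical stand-in for the trained P2P-Conv network:
    # here it simply shifts every input point by a fixed offset.
    return points + np.array([0.0, 0.0, 1.0])

def dense_transformation(shape, subset_size, num_subsets, predict_fn, rng):
    """Combine network outputs over multiple random point subsets
    into one dense per-point displacement field by averaging."""
    n = shape.shape[0]
    acc = np.zeros_like(shape)          # accumulated displacements
    counts = np.zeros(n)                # how often each point was sampled
    for _ in range(num_subsets):
        idx = rng.choice(n, size=subset_size, replace=False)
        out = predict_fn(shape[idx])
        acc[idx] += out - shape[idx]    # accumulate predicted displacement
        counts[idx] += 1
    counts = np.maximum(counts, 1)      # avoid divide-by-zero for unvisited points
    return acc / counts[:, None]        # mean displacement per point
```

With enough subsets, every point is sampled at least once with high probability, so the averaged field covers the full shape; the final surface mesh would then be obtained by non-rigid CPD registration to the displaced points.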
Ma Lei, Lian Chunfeng, Kim Daeseung, Xiao Deqiang, Wei Dongming, Liu Qin, Kuang Tianshu, Ghanbari Maryam, Li Guoshi, Gateno Jaime, Shen Steve G F, Wang Li, Shen Dinggang, Xia James J, Yap Pew-Thian
3D point clouds, Face-bone shape transformation, Orthognathic surgical planning, Point-displacement network