
In Neural networks : the official journal of the International Neural Network Society

Pairwise learning usually refers to learning problems that operate on pairs of training samples, such as ranking, similarity and metric learning, and AUC maximization. To overcome the computational challenge of pairwise learning at large scale, this paper introduces the Nyström sampling approach into the coefficient-based regularized pairwise algorithm in the context of kernel networks. Our theorems establish that the resulting Nyström estimator achieves the minimax error over all estimators using the whole dataset, provided that the subsampling level is not too small. We derive the functional relation between the subsampling level and the regularization parameter that guarantees reduced computational cost and optimal asymptotic behavior simultaneously. The Nyström coefficient-based pairwise learning method does not require the kernel to be symmetric or positive semi-definite, which provides greater flexibility and adaptivity in the learning process. We apply the method to the bipartite ranking problem, improving on the state-of-the-art theoretical results of previous works. By developing probability inequalities for U-statistics on Hilbert-Schmidt operators, we provide new mathematical tools for handling the pairs of examples involved in pairwise learning.
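To make the subsampling idea concrete, the following is a minimal sketch of the classic Nyström approximation of a kernel matrix, which the paper extends to coefficient-based pairwise learning. This is an illustration of the general technique only, not the paper's algorithm: the function names, the Gaussian kernel choice, and all parameters here are hypothetical, and unlike the paper's method this standard construction assumes a symmetric positive semi-definite kernel.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def nystrom_factor(X, m, gamma=1.0, seed=None):
    """Rank-m Nystroem approximation of the full kernel matrix K(X, X).

    Subsamples m landmark points and returns a factor L such that
    K ~= L @ L.T, i.e. K_nm K_mm^+ K_nm^T, avoiding the full n x n matrix.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)   # subsampling level m
    landmarks = X[idx]
    K_nm = gaussian_kernel(X, landmarks, gamma)            # n x m cross-kernel
    K_mm = gaussian_kernel(landmarks, landmarks, gamma)    # m x m landmark kernel
    # Pseudo-inverse square root of K_mm via SVD gives the factor L.
    U, s, _ = np.linalg.svd(K_mm)
    s_inv_sqrt = np.where(s > 1e-12, 1.0 / np.sqrt(s), 0.0)
    return K_nm @ (U * s_inv_sqrt)

# Tiny demonstration: the low-rank factor closely reproduces the full matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
K = gaussian_kernel(X, X)
L = nystrom_factor(X, m=40, seed=0)
err = np.linalg.norm(K - L @ L.T) / np.linalg.norm(K)
```

The computational saving is that only the n×m and m×m blocks are ever formed, so the cost scales with the subsampling level m rather than with n²; the paper's theorems characterize how small m can be while still matching the minimax rate of the full-data estimator.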

Wang Cheng, Hu Ting, Jiang Siyang

2022-Oct-21

Coefficient-based regularization, Convergence rate, Kernel network, Nyström sampling approach, Pairwise learning