
In Neural Networks: The Official Journal of the International Neural Network Society

The self-attention mechanism has been successfully introduced into Graph Neural Networks (GNNs) for graph representation learning, achieving state-of-the-art performance on tasks such as node classification and node attacks. In most existing attention-based GNNs, the attention score is computed only between pairs of directly connected nodes, using their representations at a single layer. However, this way of computing attention scores cannot account for a node's multi-hop neighbors, which supply graph structure information and influence many tasks, including link prediction, knowledge graph completion, and adversarial attack. To address this problem, this paper proposes Path Reliability-based Graph Attention Networks (PRGATs), a novel method that incorporates multi-hop neighboring context into the attention score computation, enabling the model to capture longer-range dependencies and large-scale structural information within a single layer. Moreover, the path reliability-based attention layer, the core layer of PRGATs, uses a resource-constrained allocation algorithm to compute reliable paths and their attention scores from neighboring nodes to non-neighboring nodes, increasing the receptive field of every message-passing layer. Experimental results on real-world datasets show that, compared with baselines, our model outperforms existing methods by up to 3% on standard node classification and 12% on graph universal adversarial attack.
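As a point of reference, below is a minimal NumPy sketch of the one-hop, single-layer attention computation the abstract describes as standard in attention-based GNNs (GAT-style, after Velickovic et al., 2018). It is not the paper's code; the function name, shapes, and the LeakyReLU slope are illustrative assumptions. The masking step makes the limitation concrete: scores exist only where an edge exists, so multi-hop neighbors contribute nothing within a single layer.

```python
import numpy as np

def gat_attention_scores(h, adj, W, a):
    """One-hop GAT-style attention: scores exist only where adj[i, j] == 1.

    h:   (N, F)   node features at the current layer
    adj: (N, N)   binary adjacency matrix (1 = directly connected)
    W:   (F, Fp)  shared linear projection
    a:   (2*Fp,)  attention vector
    """
    z = h @ W                         # project node features
    N = z.shape[0]
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # raw score e_ij = LeakyReLU(a^T [z_i || z_j])
            pair = np.concatenate([z[i], z[j]])
            e[i, j] = np.maximum(0.2 * (a @ pair), a @ pair)
    # mask out non-neighbors: this one-hop restriction is what PRGATs relax
    e = np.where(adj > 0, e, -np.inf)
    # softmax over each node's direct neighborhood
    # (assumes every node has at least one neighbor, e.g. via self-loops)
    e = e - e.max(axis=1, keepdims=True)
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return alpha
```

The abstract's proposal replaces this hard adjacency mask with path-reliability scores computed by a resource-constrained allocation algorithm, so that non-neighboring nodes can also receive attention mass; the details of that algorithm are given in the paper itself.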

Li Yayang, Liang Shuqing, Jiang Yuncheng

2022-Nov-19

Deep learning, Graph Neural Networks, Graph attention network, Graph transformer, Path reliability