
In Methods (San Diego, Calif.)

Self-supervised learning has shown superior performance on graph-related tasks in recent years. The most advanced methods are based on contrastive learning, which is severely limited by structured data-augmentation techniques and complex training schemes. Generative self-supervised learning, especially graph autoencoders (GAEs), avoids these dependencies and has been demonstrated to be an effective approach. In addition, most previous works reconstruct only the graph topological structure or only the node features; few consider both and combine them to exploit their complementary information. To overcome these problems, we propose a generative self-supervised graph representation learning method named Multi-View Dual-decoder Graph Autoencoder (MDGA). Specifically, we first design a multi-sample graph learning strategy that improves the generalization of the dual-decoder graph autoencoder. Moreover, the proposed model reconstructs the graph topological structure with a traditional GAE and extracts node attributes by masked feature reconstruction. Experimental results on five public benchmark datasets demonstrate that MDGA outperforms state-of-the-art methods on both node classification and link prediction tasks.
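The abstract combines two reconstruction targets: topology via a classical GAE (inner-product decoder over node embeddings) and node attributes via masked feature reconstruction. The sketch below illustrates that dual-decoder idea with a toy NumPy example; it is not the authors' implementation — the single GCN-style encoder, the linear feature decoder, the mask choice, and the equal loss weighting are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, X, W):
    # One GCN-style propagation: ReLU(D^-1/2 (A+I) D^-1/2 X W)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy graph: 4 nodes in a ring, 5-dimensional features (illustrative data)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 5))

# Shared encoder: single GCN layer mapping features to 3-dim embeddings
W_enc = rng.normal(size=(5, 3))
Z = gcn_layer(A, X, W_enc)

# Decoder 1 (topology view): classical GAE inner-product reconstruction
A_rec = sigmoid(Z @ Z.T)

# Decoder 2 (attribute view): mask node 0's features, re-encode the
# masked graph, then linearly map embeddings back to feature space
mask = np.zeros(4, dtype=bool)
mask[0] = True
X_masked = X.copy()
X_masked[mask] = 0.0
Z_masked = gcn_layer(A, X_masked, W_enc)
W_dec = rng.normal(size=(3, 5))
X_rec = Z_masked @ W_dec

# Combined objective: edge BCE + MSE on the masked nodes only
eps = 1e-9
bce = -np.mean(A * np.log(A_rec + eps) + (1 - A) * np.log(1 - A_rec + eps))
mse = np.mean((X_rec[mask] - X[mask]) ** 2)
loss = bce + mse
print(A_rec.shape, X_rec.shape, np.isfinite(loss))
```

In a trained model the two losses would be minimized jointly over the encoder and decoder weights, so the embeddings `Z` carry both structural and attribute information; here the weights are random and only the forward pass is shown.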

He Mengyao, Zhao Qingqing, Zhang Han

2023-Feb-13

Graph autoencoder, Graph neural networks, Graph representation learning, Self-supervised learning