Chapter 18: Graph Neural Networks: Self-supervised Learning

Yu Wang, Vanderbilt University, yu.wang.1@vanderbilt.edu
Wei Jin, Michigan State University, jinwei2@msu.edu
Tyler Derr, Vanderbilt University, tyler.derr@vanderbilt.edu

Abstract

Although deep learning has achieved state-of-the-art performance across numerous domains, these models generally require large annotated datasets to reach their full potential and avoid overfitting. However, obtaining such datasets can be costly or even impossible. Self-supervised learning (SSL) seeks to create and utilize specific pretext tasks on unlabeled data to alleviate this fundamental limitation of deep learning models. Although initially applied in the image and text domains, interest has recently turned to leveraging SSL in the graph domain to improve the performance of graph neural networks (GNNs). For node-level tasks, GNNs can inherently incorporate unlabeled node data through neighborhood aggregation, unlike models in the image or text domains; nevertheless, they can still benefit from novel pretext tasks that encode richer information, and numerous such methods have recently been developed. For GNNs solving graph-level tasks, applying SSL methods aligns more closely with other traditional domains, but it still presents unique challenges and has been the focus of a few works. In this chapter, we summarize recent developments in applying SSL to GNNs, categorizing them by the training strategies and types of data used to construct their pretext tasks, and finally discuss open challenges for future directions.
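To make the abstract's points concrete, the following is a minimal, illustrative PyTorch sketch (not taken from the chapter): a toy mean-aggregation GNN layer, which incorporates unlabeled neighbor features, paired with a feature-masking pretext task of the kind this chapter surveys. All class and parameter names (MeanAggregationLayer, FeatureMaskingPretext, mask_rate) are hypothetical choices for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanAggregationLayer(nn.Module):
    """Toy GNN layer: each node averages its neighbors' (and its own)
    features, then applies a learned linear transform. No labels are
    needed for this aggregation step."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: dense (n, n) adjacency with self-loops; row-normalize it
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = (adj @ x) / deg  # neighborhood aggregation (mean of neighbors)
        return torch.relu(self.linear(h))

class FeatureMaskingPretext(nn.Module):
    """Feature-based pretext task: zero out a random subset of node
    feature vectors and train the encoder to reconstruct them from
    the surrounding graph context -- no ground-truth labels involved."""
    def __init__(self, encoder: nn.Module, hid_dim: int, feat_dim: int,
                 mask_rate: float = 0.15):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Linear(hid_dim, feat_dim)
        self.mask_rate = mask_rate

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        mask = torch.rand(x.size(0)) < self.mask_rate  # nodes to hide
        x_corrupt = x.clone()
        x_corrupt[mask] = 0.0               # hide the selected features
        z = self.encoder(x_corrupt, adj)    # embed nodes from context
        recon = self.decoder(z)             # predict original features
        return F.mse_loss(recon[mask], x[mask])

# Pretrain on an unlabeled random graph; the encoder's weights could
# then be fine-tuned on a downstream task such as node classification.
n, d, h = 100, 16, 32
x = torch.randn(n, d)
rand_adj = (torch.rand(n, n) < 0.05).float()
adj = ((rand_adj + rand_adj.t() + torch.eye(n)) > 0).float()  # symmetric + self-loops
model = FeatureMaskingPretext(MeanAggregationLayer(d, h), h, d)
loss = model(x, adj)
loss.backward()
```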

Contents

  • Introduction
  • Self-supervised Learning
  • Applying SSL to GNNs: Categorizing Training Strategies, Loss Functions, and Pretext Tasks
    • Training Strategies
    • Loss Functions
    • Pretext Tasks
  • Node-level SSL Pretext Tasks
    • Structure-based Pretext Tasks
    • Feature-based Pretext Tasks
    • Hybrid Pretext Tasks
  • Graph-level SSL Pretext Tasks
    • Structure-based Pretext Tasks
    • Feature-based Pretext Tasks
    • Hybrid Pretext Tasks
  • Node-graph-level SSL Pretext Tasks
  • Discussion
  • Summary

Citation

@incollection{GNNBook-ch18-wang,
author = "Wang, Yu and Jin, Wei and Derr, Tyler",
editor = "Wu, Lingfei and Cui, Peng and Pei, Jian and Zhao, Liang",
title = "Graph Neural Networks: Self-supervised Learning",
booktitle = "Graph Neural Networks: Foundations, Frontiers, and Applications",
year = "2022",
publisher = "Springer Singapore",
address = "Singapore",
pages = "391--420",
}

Y. Wang, W. Jin, and T. Derr, “Graph neural networks: Self-supervised learning,” in Graph Neural Networks: Foundations, Frontiers, and Applications, L. Wu, P. Cui, J. Pei, and L. Zhao, Eds. Singapore: Springer Singapore, 2022, pp. 391–420.