Chapter 8: Graph Neural Networks: Adversarial Robustness

Stephan Günnemann, Technical University of Munich, guennemann@in.tum.de

Abstract

Graph neural networks have achieved impressive results in various graph learning tasks and have found their way into many applications such as molecular property prediction, cancer classification, fraud detection, and knowledge graph reasoning. With the increasing number of GNN models deployed in scientific applications, safety-critical environments, and decision-making contexts involving humans, it is crucial to ensure their reliability. In this chapter, we provide an overview of the current research on the adversarial robustness of GNNs. We introduce the unique challenges and opportunities that come along with the graph setting and survey works exposing the limitations of classic GNNs via adversarial example generation. Building upon these insights, we introduce and categorize methods that provide provable robustness guarantees for graph neural networks, as well as principles for improving the robustness of GNNs. We conclude with a discussion of proper evaluation practices that take robustness into account.

Contents

  • Motivation
  • Limitations of Graph Neural Networks: Adversarial Examples
    • Categorization of Adversarial Attacks
    • The Effect of Perturbations and Some Insights
    • Discussion and Future Directions
  • Provable Robustness: Certificates for Graph Neural Networks
    • Model-Specific Certificates
    • Model-Agnostic Certificates
    • Advanced Certification and Discussion
  • Improving Robustness of GNNs
    • Improving the Graph
    • Improving the Training Procedure
    • Improving the Graph Neural Networks' Architecture
    • Discussion and Future Directions
  • Proper Evaluation in the View of Robustness
  • Summary

Citation

@incollection{GNNBook-ch8-gunnemann,
author = "G{\"u}nnemann, Stephan",
editor = "Wu, Lingfei and Cui, Peng and Pei, Jian and Zhao, Liang",
title = "Graph Neural Networks: Adversarial Robustness",
booktitle = "Graph Neural Networks: Foundations, Frontiers, and Applications",
year = "2022",
publisher = "Springer Singapore",
address = "Singapore",
pages = "149--176",
}

S. Günnemann, “Graph neural networks: Adversarial robustness,” in Graph Neural Networks: Foundations, Frontiers, and Applications, L. Wu, P. Cui, J. Pei, and L. Zhao, Eds. Singapore: Springer Singapore, 2022, pp. 149–176.