GNNExplainer: Generating Explanations for Graph Neural Networks
General tool for explaining predictions made by graph neural networks (GNNs).
Tags: graph-neural-networks, interpretability, explainability, graphs, research, tutorial, code, library

Given a trained GNN model and a prediction instance as input, GNNExplainer produces an explanation of the model's prediction in the form of a compact subgraph structure, together with the small set of node feature dimensions that are most important for that prediction.
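The core idea, learning a soft mask over edges so that the masked graph still yields the model's prediction while the mask stays sparse, can be sketched in a toy example. Everything below is hypothetical: a frozen linear scorer stands in for a trained GNN, the two-edge neighborhood is made up, and the real method also learns a feature mask and optimizes against the full model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical setup: node 0 is the node being explained; it aggregates
# messages from nodes 1 and 2 over two candidate edges. The "trained GNN"
# is a frozen linear scorer w that relies on node 1's features and ignores
# node 2's, so edge (1 -> 0) is the true explanation.
x = {0: np.array([0.0, 0.0]),
     1: np.array([1.0, 0.0]),   # informative neighbor
     2: np.array([0.0, 1.0])}   # uninformative neighbor
w = np.array([2.0, 0.0])        # frozen model weights
edges = [1, 2]                  # source nodes of edges into node 0

theta = np.zeros(len(edges))    # mask logits, one per candidate edge
lam, lr = 0.1, 1.0              # sparsity weight, learning rate

for _ in range(500):
    m = sigmoid(theta)          # soft edge mask, each value in (0, 1)
    h = x[0] + sum(m[i] * x[e] for i, e in enumerate(edges))
    p = sigmoid(w @ h)          # model confidence on the masked graph
    # Loss: keep the original prediction likely (-log p for label 1)
    # plus a sparsity penalty lam * sum(m) on the mask.
    dz = p - 1.0                # gradient of -log p w.r.t. the logit
    for i, e in enumerate(edges):
        dm = dz * (w @ x[e]) + lam              # gradient w.r.t. mask value
        theta[i] -= lr * dm * m[i] * (1.0 - m[i])  # chain rule through sigmoid

mask = sigmoid(theta)
print(mask)  # edge from node 1 is kept, edge from node 2 is pruned
```

The sigmoid parametrization keeps mask values in (0, 1), so the sparsity term can push unimportant edges toward 0 while the prediction term holds the important ones near 1; thresholding the learned mask yields the explanatory subgraph.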

