ArXiv Preprint
In recent years, Graph Neural Networks have reported outstanding performance
in tasks like community detection, molecule classification and link prediction.
However, the black-box nature of these models prevents their application in
domains like health and finance, where understanding the models' decisions is
essential. Counterfactual Explanations (CE) provide this understanding
through examples. Moreover, the literature on CE is flourishing with novel
explanation methods tailored to graph learning.
In this survey, we analyse the existing Graph Counterfactual Explanation
methods by providing the reader with an organisation of the literature
according to a uniform formal notation for definitions, datasets, and metrics,
thus simplifying potential comparisons with respect to the methods' advantages
and disadvantages. We discuss seven methods and sixteen synthetic and real
datasets, providing details on the possible generation strategies. We highlight
the most common evaluation strategies and formalise nine of the metrics used in
the literature. We also introduce the evaluation framework GRETEL and show how
it can be extended and used, adding a further dimension of comparison that
encompasses reproducibility aspects. Finally, we discuss how counterfactual
explanations interplay with privacy and fairness, before delving into open
challenges and future work.
Mario Alfonso Prado-Romero, Bardh Prenkaj, Giovanni Stilo, Fosca Giannotti
2022-10-21