The remarkable success of graph neural networks (GNNs) has spurred the development of eXplainable Graph Learning (XGL) methods. However, existing XGL approaches are susceptible to exploiting shortcuts (i.e., spurious correlations) in the data to produce predictions and compose explanations, undermining the trustworthiness and reliability of XGL.
To address this problem, a research team led by Qi LIU published new research on 15 August 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposes a Shortcut-guided Graph Rationalization (SGR) method, which identifies rationales as explanations by learning from shortcuts. SGR consists of two training stages. In the first stage, a shortcut guider is trained with an early-stop strategy to capture shortcut information. In the second stage, SGR separates each input graph into rationale and non-rationale subgraphs, and both subgraphs learn from the shortcut information produced by the frozen shortcut guider, so that the model can distinguish which information belongs to shortcuts and which does not. Finally, SGR employs the non-rationale subgraphs as environments and identifies the invariant rationales that filter out shortcuts under environment shifts. Extensive experiments on synthetic and real-world datasets validate the effectiveness of the proposed SGR method and underscore its ability to provide faithful explanations.
Future work could apply shortcut-guided methods to other domains, such as natural language processing, image recognition, and time-series analysis, to evaluate their effectiveness across different types of data.
DOI: 10.1007/s11704-024-40452-4