Knowledge graph completion (KGC) aims to fill in missing entities and relations in knowledge graphs (KGs) to address their incompleteness. Most existing KGC models suffer from limited knowledge coverage because they are designed to operate within a single KG. In contrast, multilingual KGC (MKGC) leverages seed alignment pairs between KGs in different languages to facilitate knowledge transfer and enhance completion of the target KG. Previous studies on MKGC based on graph neural networks (GNNs) have primarily used relation-aware GNNs to capture the combined features of neighboring entities and relations. However, these studies still have shortcomings in the multilingual setting. First, each language's specific semantics, structures, and expressions increase the heterogeneity of the KGs, so MKGC requires a thorough treatment of this heterogeneity and an effective integration of the heterogeneous features. Second, multilingual KGs are typically large in scale because they store and manage information from multiple languages, yet current relation-aware GNNs often inherit complex GNN operations, resulting in unnecessary computational cost. It is therefore necessary to simplify the GNN operations.
To address these problems, a research team led by Xindong WU published new research on 15 July 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposed a completion method based on simplified Multi-View GNNs (SM-GNN). The method was evaluated on two public multilingual knowledge graph datasets; compared with existing approaches, it achieved better completion accuracy and reduced the training time of Multi-View GNNs.
In the research, they analyzed the heterogeneity of multilingual knowledge graphs and leveraged Multi-View GNNs to fully learn the features of multilingual KGs. Additionally, they simplified the Multi-View GNN by retaining feature propagation while discarding linear transformations and non-linear activations, reducing unnecessary complexity while effectively utilizing graph contextual information.
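The simplification described above follows the same recipe popularized by Simple Graph Convolution (SGC): keep several rounds of feature propagation over the normalized adjacency matrix, but drop the per-layer weight matrices and non-linear activations. A minimal NumPy sketch of that idea (the function name and toy graph are illustrative, not taken from the paper's code):

```python
import numpy as np

def simplified_gnn(adj, features, k=2):
    """Propagate features k times over a symmetrically normalized adjacency.

    adj      : (n, n) binary adjacency matrix
    features : (n, d) initial node features
    k        : number of propagation steps (replaces k full GNN layers)
    """
    a_hat = adj + np.eye(adj.shape[0])        # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    s = d_inv_sqrt @ a_hat @ d_inv_sqrt       # normalized propagation matrix
    h = features
    for _ in range(k):
        h = s @ h  # pure propagation: no weights, no non-linear activation
    return h

# Toy graph: a 3-node path 0-1-2, one-hot initial features
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
out = simplified_gnn(adj, np.eye(3), k=2)
print(out.shape)  # (3, 3)
```

Because the loop contains only a matrix product, the k propagation steps can even be precomputed once before training, which is the main source of the reported speed-up in this family of simplified GNNs.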
First, the SM-GNN model comprises two modules: MKGC and entity alignment. The MKGC module performs the KGC task, while the entity alignment module generates alignment pairs between multilingual KGs for knowledge transfer. Second, SM-GNN employs two simplified Multi-View GNNs, one for each module, to learn the features most beneficial to each task. Finally, the MKGC and alignment modules are trained iteratively until convergence.
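The alternating training scheme above can be sketched as a simple loop. The classes and names below are toy stand-ins (assumptions for illustration, not the paper's API): each round, the alignment module supplies cross-lingual signal to the completion module, and training stops once the validation score plateaus.

```python
class ToyModule:
    """Toy stand-in: each epoch closes half the gap to a perfect score."""
    def __init__(self):
        self.score = 0.0

    def train_epoch(self, signal=None):
        self.score += 0.5 * (1.0 - self.score)  # diminishing improvement

    def validate(self):
        return self.score

def train_iteratively(mkgc, align, max_rounds=50, tol=1e-3):
    """Alternate the two modules until the completion score stops improving."""
    prev = float("-inf")
    for rounds in range(1, max_rounds + 1):
        align.train_epoch()                        # produce alignment pairs
        mkgc.train_epoch(signal=align.validate())  # transfer via aligned pairs
        score = mkgc.validate()
        if score - prev < tol:                     # converged
            break
        prev = score
    return rounds, score

rounds, score = train_iteratively(ToyModule(), ToyModule())
print(rounds, score)
```

The convergence check here is a plain score plateau; the paper does not specify its stopping criterion, so treat this as one plausible instantiation of "trained iteratively until convergence".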
Future research could explore how to better handle the heterogeneity of multilingual knowledge graphs to enhance the generalizability and adaptability of the method. It could also investigate how to balance the competing requirements of consistency and diversity during feature fusion to further improve completion.
DOI: 10.1007/s11704-024-3577-3