Managing irregular multivariate time series (IMTS) data is a significant challenge due to inherent irregularities and missing values. Recent advancements have utilized graph neural networks (GNNs) to address these issues, but these methods often require a reliable pre-existing or inferred graph structure to capture node correlations accurately. This becomes problematic in applications where IMTS data are streamed, making it impractical to wait for future data to estimate a suitable graph structure.
To tackle this challenge, a research team led by Dr. Miao Xu has developed a novel method leveraging an instance-attention mechanism within a dynamic GNN framework. Their
research was published on 15 August 2025 in
Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
In their research, the team developed a dynamic GNN model with an instance-attention mechanism that dynamically learns and updates graph edge weights for real-time analysis. This method is tailored for both high-frequency and low-frequency data, enhancing prediction accuracy.
The proposed DynIMTS framework is a recurrent architecture, consisting of graph learning, embedding learning, and spatial-temporal learning components. In the graph learning component, sensor embeddings from the previous timestamp are integrated into a unified graph through instance-attention mechanisms, optimizing edge weights for each data sample. The embedding learning component processes current timestamp inputs via an MLP and Graph Convolution Network (GCN) to update node representations using the learned graph structure. Finally, the spatial-temporal learning component refines sensor embeddings with a recurrent graph convolution network, producing instance-specific embeddings that are fed back into the model for continuous evolution, enhancing performance in imputation and classification tasks.
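The pipeline described above can be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function names, weight matrices, and the use of scaled dot-product attention as the instance-attention mechanism are assumptions for illustration; the paper's actual attention form, recurrence, and training procedure are not reproduced here.

```python
import numpy as np

def instance_attention_graph(node_emb, Wq, Wk):
    """Hypothetical sketch: derive per-instance edge weights from the
    previous timestamp's sensor embeddings via scaled dot-product
    attention, yielding a row-stochastic adjacency matrix."""
    q = node_emb @ Wq                          # queries, (N, d)
    k = node_emb @ Wk                          # keys,    (N, d)
    scores = q @ k.T / np.sqrt(q.shape[1])     # pairwise sensor affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    adj = np.exp(scores)
    return adj / adj.sum(axis=1, keepdims=True)  # softmax over neighbors

def gcn_update(adj, x, W):
    """One graph-convolution step: aggregate neighbor features with the
    learned edge weights, then apply a shared linear map and ReLU."""
    return np.maximum(adj @ x @ W, 0.0)

rng = np.random.default_rng(0)
N, d = 8, 16                                   # 8 sensors, 16-dim embeddings
emb_prev = rng.standard_normal((N, d))         # embeddings from previous step
Wq, Wk, W = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

adj = instance_attention_graph(emb_prev, Wq, Wk)   # graph learning component
x_t = rng.standard_normal((N, d))              # current inputs (post-MLP, assumed)
emb_new = gcn_update(adj, x_t, W)              # embedding update on learned graph
print(adj.shape, emb_new.shape)                # (8, 8) (8, 16)
```

In the recurrent setting, `emb_new` would be fed back as the next step's `emb_prev`, so the learned graph evolves with the stream rather than being fixed in advance.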
For imputation tasks, the plots across eight dimensions show that the predicted values closely align with the ground-truth values, capturing the overall trends and fluctuations accurately.
For classification tasks, on the P12 dataset DynIMTS consistently outperformed all baseline models across both evaluated metrics.
The graph learning and evolving process is illustrated in the figure: during the initial training stages, edge values exhibit high similarity. As training progresses, variations in color intensity among edges indicate shifts in dependencies across nodes; lighter edges fade further while darker ones intensify until the graph converges.
Future research will focus on refining the model to enhance its accuracy and efficiency further. This includes exploring additional real-world datasets, expanding the experimental platform, and optimizing the model’s parameters to ensure robust performance across various applications.
DOI: 10.1007/s11704-024-40449-z