Conventional knowledge-tracing models require thousands of question–answer pairs and offer little pedagogical insight, a mismatch with real classrooms where teachers rely on sparse evidence and explicit reasoning to guide intervention. Leveraging the few-shot in-context learning capability of GPT-4 and GLM-4, the proposed "observation–cognition–interpretation" pipeline first selects a small set of representative attempts, then fuses item text and skill tags to infer mastery, and finally articulates weaknesses and remedial suggestions in plain language. Experiments on FrcSub, MOOCRadar, and XES3G5M show that with only 4–16 samples per learner the approach matches or surpasses deep baselines such as DKT, AKT, and SAINT, while expert raters judge its explanations substantially credible. By coupling accurate prediction with actionable feedback under extreme data constraints, the study opens a practical path toward small-sample, strongly interpretable learning analytics and lays groundwork for extending assessment to open-ended problems and multimodal coursework.
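The three-stage pipeline described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the greedy skill-coverage selection, the `Attempt` record, and the prompt wording are all assumptions, and the language model is abstracted as a plain callable so any chat-completion client could be plugged in.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Attempt:
    question: str        # item text
    skills: List[str]    # skill tags
    correct: bool        # observed outcome

def select_representative(attempts: List[Attempt], k: int = 4) -> List[Attempt]:
    """Observation step: greedily pick up to k attempts that together cover
    as many distinct skills as possible (a stand-in selection heuristic)."""
    chosen, covered, pool = [], set(), list(attempts)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda a: len(set(a.skills) - covered))
        chosen.append(best)
        covered |= set(best.skills)
        pool.remove(best)
    return chosen

def build_prompt(attempts: List[Attempt]) -> str:
    """Cognition + interpretation steps: fuse item text and skill tags into a
    few-shot prompt asking the model to infer mastery and explain weaknesses."""
    lines = [
        "You are a tutor. Given a learner's attempts, infer skill mastery",
        "and explain weaknesses with remedial suggestions.",
        "",
    ]
    for i, a in enumerate(attempts, 1):
        outcome = "correct" if a.correct else "incorrect"
        lines.append(f"Attempt {i}: {a.question} "
                     f"[skills: {', '.join(a.skills)}] -> {outcome}")
    lines += ["", "Mastery estimate and explanation:"]
    return "\n".join(lines)

def trace(attempts: List[Attempt], llm: Callable[[str], str], k: int = 4) -> str:
    """Full pipeline: select representative attempts, prompt the model,
    return its plain-language mastery assessment."""
    return llm(build_prompt(select_representative(attempts, k)))
```

With 4–16 attempts per learner, the whole few-shot context stays well within a single prompt, which is what makes the small-sample setting practical.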
The work, titled "Explainable Few-shot Knowledge Tracing", was published in Frontiers of Digital Education on September 22, 2025 (DOI: 10.1007/s44366-025-0071-x).