This research addresses the zero-shot cross-domain cognitive diagnosis (ZCCD) challenge in intelligent education systems through the application of large language models (LLMs). The study introduces the LLM-guided Cognitive State Transfer (LCST) paradigm, which bridges the gap between source domains with rich response logs and target domains lacking interaction data. By reformulating cognitive diagnosis as a natural language processing task, the LCST framework leverages LLMs' reasoning capabilities to transfer students' cognitive states across disparate knowledge domains without requiring target-domain training data.
The methodology demonstrates particular strength in its four-module architecture: pre-established cognitive state extraction, cross-domain bridging through concept reasoning, target-domain diagnosis, and feedback constraint mechanisms. Through extensive experimentation across five real-world educational datasets (mathematics, physics, chemistry, history, and geography), the LCST paradigm consistently outperforms traditional random initialization methods and even approaches the performance upper bounds of oracle methods in certain scenarios. Notably, the framework achieves this while maintaining computational efficiency, with smaller LLMs like Llama3.2:3b demonstrating near-optimal performance at significantly reduced computational costs compared to larger models.
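The four-module flow described above can be sketched as a simple pipeline. Everything below is a hypothetical illustration, not the authors' implementation: the function names, prompts, and the assumption that the LLM returns a concept-to-score mapping at the diagnosis step are all placeholders introduced here.

```python
# Hypothetical sketch of the four-module LCST pipeline: cognitive state
# extraction, cross-domain concept bridging, target-domain diagnosis,
# and a feedback constraint step. All prompts and names are illustrative.

def extract_cognitive_state(llm, source_logs):
    """Module 1: summarize a student's source-domain mastery from response logs."""
    return llm(f"Summarize the student's mastery given these responses: {source_logs}")

def bridge_domains(llm, state, source_domain, target_domain):
    """Module 2: reason about how source-domain mastery maps to target-domain concepts."""
    return llm(f"Given mastery '{state}' in {source_domain}, "
               f"infer related concept mastery in {target_domain}.")

def diagnose_target(llm, bridged_state, target_concepts):
    """Module 3: per-concept mastery estimates for the target domain.

    Assumes the LLM (or a parser around it) yields a {concept: score} mapping.
    """
    return llm(f"Rate mastery (0-1) of {target_concepts} given: {bridged_state}")

def apply_feedback_constraints(diagnosis):
    """Module 4: placeholder feedback constraint -- clamp scores into [0, 1]."""
    return {c: min(max(v, 0.0), 1.0) for c, v in diagnosis.items()}

def lcst_pipeline(llm, source_logs, source_domain, target_domain, target_concepts):
    """Chain the four modules; no target-domain training data is used."""
    state = extract_cognitive_state(llm, source_logs)
    bridged = bridge_domains(llm, state, source_domain, target_domain)
    raw = diagnose_target(llm, bridged, target_concepts)
    return apply_feedback_constraints(raw)
```

In this sketch the constraint module is a trivial clamp; the paper's feedback constraint mechanism is presumably richer, but the structure, with each module consuming the previous module's output, mirrors the architecture the abstract describes.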
The study makes three substantial contributions to the field of AI in education: (1) it formally defines and provides the first comprehensive solution to the ZCCD problem, (2) it pioneers the use of LLMs as educational experts for cross-domain cognitive diagnosis, and (3) it establishes a new benchmark for prompt engineering techniques specifically tailored for educational diagnostic tasks. While the research acknowledges current limitations in prompt sensitivity and domain knowledge coverage, it simultaneously opens promising avenues for future work in adaptive fine-tuning and external knowledge integration. The LCST framework ultimately represents a significant step toward more flexible, data-efficient intelligent tutoring systems capable of providing personalized learning support even in resource-constrained educational contexts.
DOI: 10.1007/s44366-025-0054-y