Explanation-based retrieval boosts grammatical error correction

23/01/2026 Frontiers Journals

Grammatical error correction (GEC) is a key task in natural language processing (NLP), widely applied in education, news, and publishing. Traditional methods mainly rely on sequence-to-sequence (Seq2Seq) and sequence-to-edit (Seq2Edit) models, while large language models (LLMs) have recently shown strong performance in this area.

A research team has now introduced a new method, called RE² (Retrieving Examples with similar Explanation), to address a long-standing challenge: finding reference examples that truly help models correct grammatical errors. The team, led by Baoxin WANG, published its research on 15 December 2025 in Frontiers of Computer Science, which is co-published by Higher Education Press and Springer Nature.

Most existing approaches retrieve correction examples based on surface-level text similarity. However, sentences with similar wording may contain very different types of errors, leading to mismatched references. For example, two sentences might share the phrase “这青山绿水” (“these green mountains and clear waters”), yet their grammatical mistakes could be unrelated.

To solve this problem, the team proposed using grammatical error explanations instead of raw text for retrieval. By matching sentences with similar error patterns, RE² provides models with more relevant examples that directly address the mistakes in the input.
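The sketch below illustrates what retrieval keyed on error explanations rather than raw text could look like. It is a minimal, hypothetical example: the encoder choice, the example bank, and all function names are assumptions for illustration, not the authors' implementation, and it presumes that an explanation for the input sentence has already been produced (for instance by an LLM).

```python
# Illustrative sketch only: retrieval keyed on error explanations rather than
# raw sentence text. All names and data here are hypothetical.
from sentence_transformers import SentenceTransformer, util

# Hypothetical example bank: each entry pairs an erroneous sentence with its
# correction and a natural-language explanation of the grammatical error.
example_bank = [
    {"source": "<erroneous sentence A>", "target": "<corrected sentence A>",
     "explanation": "Redundant component: an adverb repeats the meaning already carried by the verb."},
    {"source": "<erroneous sentence B>", "target": "<corrected sentence B>",
     "explanation": "Missing component: the verb lacks an object, leaving the sentence incomplete."},
]

# Any off-the-shelf sentence encoder would do for this sketch.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
bank_embeddings = encoder.encode(
    [ex["explanation"] for ex in example_bank], convert_to_tensor=True
)

def retrieve_by_explanation(input_explanation: str, k: int = 2) -> list[dict]:
    """Return the k bank examples whose explanations are most similar to the
    explanation generated for the input sentence."""
    query = encoder.encode(input_explanation, convert_to_tensor=True)
    scores = util.cos_sim(query, bank_embeddings)[0]  # cosine similarities
    top = scores.topk(min(k, len(example_bank)))
    return [example_bank[i] for i in top.indices.tolist()]
```

The point of the sketch is only the contrast with text-similarity retrieval: the embedded query is the description of the error, so two sentences that share wording but fail in different ways no longer match each other.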
To make this approach possible, the researchers built a large-scale dataset of Chinese grammatical error explanations, named FCGEE (Fine-grained Chinese Grammatical Error Explanation). They used GPT-4o and data from the FCGEC dataset to generate detailed explanations, and further refined them with official exam materials. This dataset enables reliable retrieval of examples based on explanation similarity.

The RE² method was tested on two benchmark datasets, FCGEC and NaCGEC, which contain errors from primary and secondary school Chinese examinations. These errors are often challenging even for native speakers. By integrating RE² into both in-context learning (ICL) and supervised fine-tuning (SFT) frameworks, the researchers demonstrated significant improvements in correction accuracy compared with text-similarity-based methods. The results confirm that explanation-driven retrieval offers more effective guidance for large language models.
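In the in-context learning setting, the retrieved examples are placed into the prompt given to the LLM. The snippet below shows one plausible way to assemble such a few-shot prompt; it reuses the hypothetical field names from the sketch above and does not reproduce the paper's actual template.

```python
# Hypothetical few-shot prompt assembly; field names follow the sketch above
# and are not the paper's actual template.
def build_icl_prompt(input_sentence: str, retrieved: list[dict]) -> str:
    """Assemble a few-shot correction prompt from retrieved examples."""
    parts = ["Correct the grammatical error in the last sentence."]
    for ex in retrieved:
        parts.append(f"Erroneous: {ex['source']}")
        parts.append(f"Explanation: {ex['explanation']}")
        parts.append(f"Corrected: {ex['target']}")
    parts.append(f"Erroneous: {input_sentence}")
    parts.append("Corrected:")
    return "\n".join(parts)
```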
The team plans to expand the dataset to cover more error types and explore multilingual applications. They believe that explanation-based retrieval could make LLMs not only better at correcting grammar but also more interpretable and helpful for language learning.

DOI: 10.1007/s11704-025-41399-w
Attached files
  • Examples of retrieving reference examples
  • Overview of RE2 Method
Regions: Asia, China
Keywords: Applied science, Computing

