The need is hard to ignore. The paper notes that the number of systematic reviews indexed in PubMed rose from an estimated 1,432 per year in 2000 to 29,073 in 2019. That explosive growth has expanded access to synthesized evidence, but it has also fueled duplication, redundancy, and inconsistency across reviews addressing the same question. Existing tools such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020, A Measurement Tool to Assess Systematic Reviews (AMSTAR) 2, and Risk of Bias in Systematic Reviews (ROBIS) can evaluate reporting quality, rigor, and bias, yet none was designed specifically to measure whether one review substantially duplicates another. These gaps underscore the need for a dedicated, systematic approach to identifying and assessing duplication among systematic reviews.
Researchers from the Affiliated Traditional Chinese Medicine Hospital of Guangzhou Medical University, Lanzhou University, and Hong Kong Baptist University published a protocol for developing the systematic review duplication (SRD) tool in Evidence-Based Chinese Medicine and Technology Assessment in 2026 (DOI: 10.26599/eCMTA.2026.9570031). The article was received on December 16, 2025, revised on February 18, 2026, and accepted on March 20, 2026. The protocol lays out a plan to create and validate a standardized instrument for comparing pairs of intervention-based systematic reviews that address the same disease or condition, with the goal of determining whether their overlap reflects meaningful replication or avoidable redundancy.
The proposed design is ambitious and deliberately practical. The team will build the tool in three phases: preparatory work, tool development, and dissemination. At the center is a four-domain framework covering research topic, research methods, research results, and methodological quality. Instead of forcing a simple yes-or-no decision, the tool is intended to generate a qualitative duplication profile showing where two reviews truly converge and where they differ in important ways. Development will include literature-based item generation, pilot testing with 40 systematic reviews across 17 disease categories, a two-round modified Delphi process with expert input, consensus meetings, and reliability testing. The final product is expected to be available in both web-based and Excel-based versions, including a full version for in-depth assessment and a simplified version for rapid screening.
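To make the idea of a domain-level duplication profile concrete, here is a minimal, purely illustrative Python sketch. The class names, the three-level rating scale, and the summary format are assumptions made for illustration only; the protocol specifies the four domains and a qualitative (rather than binary) output, not this code or any particular rating scheme.

```python
# Hypothetical sketch only: not the authors' tool. The four domains come from
# the protocol; the rating levels and summary format are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Overlap(Enum):
    """Illustrative qualitative levels; the actual tool's scale is not specified here."""
    SIMILAR = "substantially similar"
    PARTIAL = "partially overlapping"
    DISTINCT = "clearly distinct"


@dataclass
class DuplicationProfile:
    """Qualitative comparison of two systematic reviews across the four protocol domains."""
    research_topic: Overlap
    research_methods: Overlap
    research_results: Overlap
    methodological_quality: Overlap

    def summary(self) -> str:
        """Return a per-domain profile instead of a single duplicate/not-duplicate verdict."""
        domains = {
            "Research topic": self.research_topic,
            "Research methods": self.research_methods,
            "Research results": self.research_results,
            "Methodological quality": self.methodological_quality,
        }
        return "; ".join(f"{name}: {level.value}" for name, level in domains.items())


# Example comparison of two hypothetical reviews addressing the same condition
profile = DuplicationProfile(
    research_topic=Overlap.SIMILAR,
    research_methods=Overlap.PARTIAL,
    research_results=Overlap.SIMILAR,
    methodological_quality=Overlap.DISTINCT,
)
print(profile.summary())
```

The only point of the sketch is that judgments can be reported side by side for each domain, so reviewers see where two reviews converge and where they diverge rather than receiving a single yes-or-no answer.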
“Medicine does not necessarily need more reviews. It needs reviews that are genuinely needed.” That is the larger message carried by this protocol. Framed that way, the proposed tool reads less like a technical checklist and more like a gatekeeper for a crowded evidence landscape. It points toward a future in which novelty is judged more transparently, duplication is identified earlier, and new evidence syntheses are expected to prove not only that they are possible, but that they are worth doing.
If validated as planned, the tool could shape the full life cycle of evidence synthesis. Researchers could use it before launching a review to determine whether a meaningful gap remains. Editors and peer reviewers could apply it when judging novelty and contribution. Guideline developers and health technology assessment teams could use it to choose among overlapping reviews more confidently. The authors also suggest that a structured duplication framework may be especially valuable in areas with heterogeneous interventions and outcome measures, where repeated reviews can fragment rather than strengthen the evidence base. In that sense, the SRD tool is positioned as both a methodological innovation and a practical step toward a leaner, clearer, and more trustworthy research system.
###
References
DOI
10.26599/eCMTA.2026.9570031
Original Source URL
https://www.sciopen.com/article/10.26599/eCMTA.2026.9570031
Funding Information
The study was sponsored by the Project of Administration of Traditional Chinese Medicine of Guangdong Province (grant no.: 20251280), Guangzhou Health Science and Technology Project (grant no.: 20242A011005), Guangzhou Science and Technology Fund (grant nos.: 2024A03J0791, 2025A03J3510, 2025A03J3512, and 2025A03J3428), Guangzhou Key Science and Technology Project of TCM (grant no.: 2025ZD010), Plan on Enhancing Scientific Research in GMU (grant nos.: GMUCR 2024-01018 and GMUCR2024-02030), and the Young Scientific and Technological Talents Research Project of the Affiliated Traditional Chinese Medicine Hospital (grant nos.: 2022RC07, 2024SZYRC13, and 2024SZYRC15).
About Evidence-Based Chinese Medicine and Technology Assessment
Evidence-Based Chinese Medicine and Technology Assessment (eCMTA) is a journal focused on strengthening the scientific foundation of Chinese medicine and integrative healthcare. It publishes multidisciplinary research that supports better clinical, public health, and policy decision-making, covering topics such as evidence synthesis, guideline development, technology assessment, health economics, clinical trials, and real-world studies. The journal places particular emphasis on building reliable evidence for Chinese medicine through modern research methods and transparent evaluation standards. It also maintains rigorous peer review and follows recognized publishing ethics principles to safeguard research integrity. Published by Tsinghua University Press in collaboration with Beijing University of Chinese Medicine, the journal serves as a platform for bridging traditional medical knowledge with contemporary evidence-based practice.