AI tools may be weakening the quality of published research, study warns

Artificial intelligence could be affecting the scientific rigour of new research, according to a study from the University of Surrey.

The research team has called for a range of measures to reduce the flood of "low-quality" and "science fiction" papers, including stronger peer review processes and the use of statistical reviewers for complex datasets.

In a study published in PLOS Biology, researchers reviewed papers published between 2014 and 2024 that proposed an association between a predictor and a health condition using a US government dataset, the National Health and Nutrition Examination Survey (NHANES).

NHANES is a large, publicly available dataset used by researchers around the world to study links between health conditions, lifestyle and clinical outcomes. The team found that between 2014 and 2021, just four NHANES association-based studies were published each year – but this rose to 33 in 2022, 82 in 2023, and 190 in 2024.

Dr Matt Spick, co-author of the study from the University of Surrey, said:

“While AI has the clear potential to help the scientific community make breakthroughs that benefit society, our study has found that it is also part of a perfect storm that could be damaging the foundations of scientific rigour.

“We’ve seen a surge in papers that look scientific but don’t hold up under scrutiny – this is ‘science fiction’ using national health datasets to masquerade as science fact. The use of these easily accessible datasets via APIs, combined with large language models, is overwhelming some journals and peer reviewers, reducing their ability to assess more meaningful research – and ultimately weakening the quality of science overall.”

The study found that many post-2021 papers used a superficial and oversimplified approach to analysis – often focusing on single variables while ignoring more realistic, multi-factor explanations of the links between health conditions and potential causes. Some papers cherry-picked narrow data subsets without justification, raising concerns about poor research practice, including data dredging or changing research questions after seeing the results.
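The pitfall described above can be made concrete with a toy simulation (entirely hypothetical, not data from the study): when a confounder such as age drives both a predictor and an outcome, a single-variable analysis reports a strong "association" that largely disappears once the confounder is held fixed.

```python
import random

random.seed(0)

# Simulate 10,000 hypothetical participants. "age" is a confounder that
# drives both a biomarker (the predictor) and a health score (the outcome).
n = 10_000
rows = []
for _ in range(n):
    age = random.random()                  # confounder, scaled 0..1
    marker = age + random.gauss(0, 0.2)    # predictor tracks age plus noise
    outcome = age + random.gauss(0, 0.2)   # outcome also tracks age plus noise
    rows.append((age, marker, outcome))

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Naive single-variable analysis: marker vs. outcome, ignoring age entirely.
naive = corr([r[1] for r in rows], [r[2] for r in rows])

# Crude adjustment: the same correlation within a narrow age band,
# which removes most of the variation the confounder contributes.
band = [r for r in rows if 0.45 <= r[0] <= 0.55]
adjusted = corr([r[1] for r in band], [r[2] for r in band])

print(f"naive r = {naive:.2f}, age-adjusted r = {adjusted:.2f}")
```

In this sketch the naive correlation is strong while the within-band correlation is close to zero, illustrating why the authors flag papers that test single variables without multi-factor adjustment.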

Tulsi Suchak, post-graduate researcher at the University of Surrey and lead author of the study, added:

“We’re not trying to block access to data or stop people using AI in their research – we’re asking for some common sense checks. This includes things like being open about how data is used, making sure reviewers with the right expertise are involved, and flagging when a study only looks at one piece of the puzzle. These changes don’t need to be complex, but they could help journals spot low-quality work earlier and protect the integrity of scientific publishing.”

To help tackle the issue, the team has laid out a number of practical steps for journals, researchers and data providers. They recommend that researchers use the full datasets available to them unless there’s a clear and well-explained reason to do otherwise, and that they are transparent about which parts of the data were used, over what time periods, and for which groups.

For journals, the authors suggest strengthening peer review by involving reviewers with statistical expertise and making greater use of early desk rejection to reduce the number of formulaic or low-value papers entering the system. Finally, they propose that data providers assign unique application numbers or IDs to track how open datasets are used – a system already in place for some UK health data platforms.

Anietie E Aliu, co-author of the study and post-graduate student at the University of Surrey, said:

“We believe that in the AI era, scientific publishing needs better guardrails. Our suggestions are simple things that could help stop weak or misleading studies from slipping through, without blocking the benefits of AI and open data. These tools are here to stay, so we need to act now to protect trust in research.”

"Explosion of formulaic research articles, including inappropriate study designs and false discoveries, based on the NHANES US national health database"; Matt Spick et al.; PLOS Biology; 8 July 2025; doi: 10.1371/journal.pbio.3003152
Regions: Europe, United Kingdom
Keywords: Science, Public Dialogue - science, Science Policy


