New Defense Strategy for Federated Learning, Capping Accuracy Loss at 0.47%

24/07/2025 Frontiers Journals

Researchers at Beihang University, in collaboration with the Beijing Zhongguancun Laboratory, have developed a new defense strategy called Long-Short Historical Gradient Federated Learning (LSH-FL), which maintains accuracy losses from attacks below 1% on key benchmarks. This approach directly tackles the risk of malicious clients hijacking decentralized model training.
New Defense Shields Federated Learning from Poisoning Threats in Healthcare, Autonomous Vehicles & Finance
Federated learning enables devices—such as smartphones or medical sensors—to train models collaboratively without sharing raw data; however, it is vulnerable to “poisoning” attacks that send malicious updates to the server. Such attacks pose a threat to applications in healthcare diagnostics, autonomous vehicles, and finance. By making federated learning more robust, LSH-FL can help ensure safer, more trustworthy AI for both industry and consumers.
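To make the threat concrete, here is a minimal sketch (not from the paper) of plain federated averaging, showing how a single malicious client that flips and scales its update can drag the aggregated model away from the honest direction. The gradient values and client counts are illustrative assumptions.

```python
import numpy as np

def fed_avg(updates):
    """Plain federated averaging: the server averages all client updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_grad = np.ones(4)                       # direction the honest clients agree on
honest = [true_grad + 0.01 * rng.standard_normal(4) for _ in range(9)]
poisoned = [-10.0 * true_grad]               # one attacker flips and scales its update

clean = fed_avg(honest)                      # close to the honest direction
attacked = fed_avg(honest + poisoned)        # dragged toward the attacker
print(clean.round(2), attacked.round(2))
```

Because the server never sees raw data, it cannot tell from the update alone whether a client trained honestly; this is the gap that robust aggregation defenses such as LSH-FL aim to close.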
New Model Caps MNIST Accuracy Drop at 0.47% and Keeps CIFAR-10 Loss Under 4% Even with 50% Attackers
The experiments produced the following clear outcomes:
  • On the MNIST handwriting dataset, LSH-FL limited the drop in model accuracy to just 0.47% under label-flip attacks, compared to a loss of over 3% with prior defenses (e.g., Multi-Krum, Trim, FABA).
  • For the CIFAR-10 image set, even when half of all participants attempted to corrupt the model, LSH-FL kept the accuracy loss under 4%, outperforming methods that failed once attackers exceeded 40% of participants (e.g., Krum and Multi-Krum).
  • Across four popular benchmarks, including CIFAR-100 and Fashion-MNIST, LSH-FL consistently reduced the impact of five common poisoning strategies more effectively than six state-of-the-art defenses.
Novel Two-Pronged Approach Uses Randomized Tweaks and Gradient History to Sniff Out Malicious Updates
To develop LSH-FL, the researchers combined two complementary strategies that work together to defend against poisoning while preserving privacy. First, they introduced short-term perturbations by adding minor, randomized adjustments to each client’s latest model updates; this makes it difficult for attackers to blend malicious changes with legitimate contributions. Second, they implemented long-term detection by maintaining a lightweight history of past updates and identifying patterns that deviate from normal behavior, allowing the system to flag and discard suspicious inputs. All experiments were conducted in standard federated learning environments, without accessing raw data, and tested under realistic network conditions to ensure the approach remains practical and efficient.
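The two strategies above can be sketched in code. Note this is a conceptual illustration under assumed details, not the authors' published algorithm: the noise scale, the cosine-similarity test, and the exponential moving average of each client's gradient history are all hypothetical choices standing in for the paper's short-term perturbation and long-term detection steps.

```python
import numpy as np

def lsh_fl_round(updates, history, noise_scale=1e-3, threshold=0.0, rng=None):
    """One aggregation round, sketching the two-pronged idea:
    1) short-term: perturb each client's latest update with small random noise;
    2) long-term: compare each update to that client's running historical mean
       and discard clients whose cosine similarity falls below a threshold."""
    rng = rng or np.random.default_rng()
    accepted = []
    for cid, g in updates.items():
        g = g + noise_scale * rng.standard_normal(g.shape)   # short-term perturbation
        h = history.get(cid)
        if h is not None:
            cos = g @ h / (np.linalg.norm(g) * np.linalg.norm(h) + 1e-12)
            if cos < threshold:     # deviates from this client's own history:
                continue            # flag and drop the suspicious update
        # update the lightweight per-client history (moving average)
        history[cid] = g.copy() if h is None else 0.9 * h + 0.1 * g
        accepted.append(g)
    return np.mean(accepted, axis=0), history
```

In this sketch, a client that behaves honestly for a while and then submits a sign-flipped gradient is filtered out, because the flipped update no longer matches the history the server has accumulated for that client.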
“By combining short-term perturbations with long-term gradient history, we’ve found a practical way to keep federated learning both accurate and secure—even when half the participants turn malicious,” said Prof. Zhilong Mi.
Potential Solution Delivers <1% Accuracy Loss and Privacy-Preserving Security for Distributed AI
LSH-FL provides a practical, low-overhead approach to hardening federated learning against malicious participants without compromising accuracy or privacy. As industries increasingly rely on decentralized AI, this approach could become a key component in deploying safe and reliable distributed learning systems. The full research article was published in Frontiers of Computer Science in May 2025 (https://doi.org/10.1007/s11704-025-40924-1).
DOI: 10.1007/s11704-025-40924-1
Regions: Asia, China
Keywords: Applied science, Computing
