Enhancing Poisoning Attack Mitigation in Federated Learning through Perturbation-Defense Complementarity on Historical Gradients


23/01/2026 Frontiers Journals

Federated Learning (FL) enables privacy-preserving model training by letting clients upload model gradients without exposing their personal data. However, the decentralized nature of FL opens it to various attacks, such as poisoning attacks, in which adversaries manipulate data or model updates to degrade performance. While current defenses often focus on detecting anomalous updates, they struggle with long-term attack dynamics, privacy leakage, and the underutilization of historical gradient data.

To address these problems, a research team led by Cong Wang published their new research on 15 December 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.

The team proposed a new approach called Long-Short Historical Gradient Federated Learning (LSH-FL), which uses historical gradients to identify malicious model updates and mitigate the effects of poisoning attacks. The defense framework is composed of two main components:

Perturbation Based on Short-Term Historical Gradients (P-SHG): This component introduces random noise into short-term gradients to disrupt the ability of attackers to hide within recent updates.

Defense Based on Long-Term Historical Gradients (D-LHG): This part aggregates long-term gradient trends to identify malicious clients and mitigate dynamic attack strategies.
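The two components can be illustrated with a minimal sketch. All function names, the Gaussian noise scale, and the cosine-similarity test below are illustrative assumptions for exposition, not the authors' exact algorithms:

```python
import numpy as np

def p_shg(short_term_grads, noise_scale=0.1):
    """Illustrative P-SHG step: perturb each short-term historical
    gradient with Gaussian noise so attackers cannot hide in recent updates."""
    return [g + np.random.normal(0.0, noise_scale, g.shape)
            for g in short_term_grads]

def d_lhg(client_update, long_term_history, threshold=0.0):
    """Illustrative D-LHG check: accept an update only if its direction
    agrees with the client's long-term gradient trend."""
    trend = np.mean(long_term_history, axis=0)
    cos = np.dot(client_update, trend) / (
        np.linalg.norm(client_update) * np.linalg.norm(trend) + 1e-12)
    return bool(cos >= threshold)  # True -> keep, False -> flag as anomalous
```

In this toy version, a flipped (sign-inverted) poisoned gradient would disagree with the long-term trend and be rejected, while an honest update passes.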

LSH-FL operates in a loop similar to classic FL methods, with four main steps: model synchronization, local model training, local model upload with perturbation, and model aggregation. Clients perform local training to generate short-term historical gradients (SHG), which are then perturbed using the P-SHG algorithm to meet differential privacy requirements. The central server applies the D-LHG algorithm to verify and aggregate the gradients, removing any abnormal client updates. This approach improves attack resilience while maintaining privacy and model accuracy.
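The four-step round described above can be sketched as a server-side loop. The toy local-training rule, noise scale, and median-distance filter are hypothetical placeholders standing in for the paper's P-SHG and D-LHG procedures:

```python
import numpy as np

def local_train(global_model, client_data):
    """Hypothetical local step: gradient of a toy quadratic loss
    pulling the model toward the client's data mean."""
    return global_model - client_data

def fl_round(global_model, clients, noise_scale=0.05, lr=0.1):
    updates = []
    for data in clients:                                    # 1. model synchronization (implicit)
        g = local_train(global_model, data)                 # 2. local model training
        g = g + np.random.normal(0.0, noise_scale, g.shape) # 3. upload with perturbation (P-SHG-style)
        updates.append(g)
    # 4. aggregation with anomaly removal (D-LHG-style filtering):
    #    discard updates far from the coordinate-wise median
    med = np.median(updates, axis=0)
    kept = [g for g in updates if np.linalg.norm(g - med) < 1.0]
    return global_model - lr * np.mean(kept, axis=0)
```

Running several such rounds drives the global model toward the clients' consensus while outlying (potentially poisoned) updates are dropped before averaging.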

In future work, the team also anticipates further enhancements to this defense strategy, including more sophisticated gradient sampling techniques and the integration of additional privacy-preserving mechanisms.

DOI: 10.1007/s11704-025-40924-1
Attachments
  • The processing flow of P-SHG
  • The processing flow of D-LHG
Regions: Asia, China
Keywords: Applied science, Computing

