Federated Learning (FL) enables privacy-preserving model training by letting clients upload model gradients instead of their personal data. However, the decentralized nature of FL exposes it to attacks such as poisoning attacks, in which adversaries manipulate data or model updates to degrade performance. Current defenses often focus on detecting anomalous updates, but they struggle to cope with long-term attack dynamics, risk compromising privacy, and underutilize historical gradient data.
To address these problems, a research team led by Cong Wang published their new research on 15 December 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
The team proposed a new approach called Long-Short Historical Gradient Federated Learning (LSH-FL), which uses historical gradients to identify malicious model updates and mitigate the effects of poisoning attacks. The new defense framework is composed of two main components (illustrated in the sketch after this list):
Perturbation Based on Short-Term Historical Gradients (P-SHG): This component introduces random noise into short-term gradients to disrupt the ability of attackers to hide within recent updates.
Defense Based on Long-Term Historical Gradients (D-LHG): This part aggregates long-term gradient trends to identify malicious clients and mitigate dynamic attack strategies.
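The following sketch illustrates how these two components might look in code. It assumes Gaussian noise with gradient clipping for P-SHG and a cosine-similarity check of per-client long-term gradient averages against a median reference for D-LHG, operating on flattened gradient vectors; the function names, noise scale sigma, clipping norm, and similarity threshold are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the two LSH-FL components (not the authors' exact algorithms).
import numpy as np

def p_shg(short_term_grad: np.ndarray, sigma: float = 0.01,
          clip_norm: float = 1.0) -> np.ndarray:
    """Perturb a client's short-term historical gradient before upload."""
    # Clip the gradient norm, then add Gaussian noise (differential-privacy style).
    norm = np.linalg.norm(short_term_grad)
    clipped = short_term_grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.normal(0.0, sigma, size=clipped.shape)
    return clipped + noise

def d_lhg(long_term_grads: dict[int, np.ndarray],
          sim_threshold: float = 0.0) -> list[int]:
    """Flag clients whose long-term gradient trend deviates from the majority."""
    # Use the element-wise median of all long-term gradients as a robust reference.
    reference = np.median(np.stack(list(long_term_grads.values())), axis=0)
    accepted = []
    for cid, grad in long_term_grads.items():
        cos = grad @ reference / (np.linalg.norm(grad) * np.linalg.norm(reference) + 1e-12)
        if cos >= sim_threshold:  # keep clients aligned with the consensus trend
            accepted.append(cid)
    return accepted
```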
The resulting defense strategy enhances poisoning-attack mitigation by leveraging historical gradient information. LSH-FL operates in a loop similar to classic FL methods, with four main steps: model synchronization, local model training, perturbed local model upload, and model aggregation. Clients perform local training to generate short-term historical gradients (SHG), which are then perturbed using the P-SHG algorithm to meet differential privacy requirements. The central server applies the D-LHG algorithm to verify and aggregate the gradients, removing abnormal client updates. This approach improves attack resilience while maintaining privacy and model accuracy.
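A minimal sketch of one such round is shown below, reusing the p_shg and d_lhg helpers from the previous sketch. The client interface (client.id, client.local_train), the exponential-moving-average factor beta used to maintain long-term history, and the plain averaging step are assumptions made for illustration; the paper's actual update rules may differ.

```python
# Hedged sketch of one LSH-FL round: synchronize, train locally,
# upload with perturbation, then verify and aggregate on the server.
import numpy as np

def lsh_fl_round(global_model: np.ndarray, clients, long_term: dict,
                 beta: float = 0.9) -> np.ndarray:
    uploads = {}
    for client in clients:
        # Steps 1-2: client synchronizes with the global model and trains
        # locally, producing its short-term historical gradient (SHG).
        shg = client.local_train(global_model)
        # Step 3: perturb the SHG with P-SHG before upload (privacy protection).
        uploads[client.id] = p_shg(shg)
        # Server-side long-term history, here kept as an exponential moving average.
        prev = long_term.get(client.id, np.zeros_like(shg))
        long_term[client.id] = beta * prev + (1 - beta) * uploads[client.id]
    # Step 4: D-LHG screens out abnormal clients, then the rest are averaged.
    accepted = d_lhg(long_term)
    update = np.mean([uploads[cid] for cid in accepted], axis=0)
    # Gradient-descent-style update; a learning rate could scale this step.
    return global_model - update
```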
In future work, the team anticipates further enhancements to this defense strategy, including more sophisticated gradient sampling techniques and the integration of additional privacy-preserving mechanisms.
DOI: 10.1007/s11704-025-40924-1