ProbsCut: enhancing adversarial robustness via global probability constraints


24.04.2026 HEP Journals

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Adversarial training is the mainstream method for improving the adversarial robustness of DNNs: it augments the training set with adversarial examples and adopts an adversarial regularization loss. Existing adversarial training methods face the challenge of balancing accuracy and robustness.
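The kind of vulnerability described above can be illustrated with a minimal FGSM-style sketch (the Fast Gradient Sign Method is a standard attack, not this paper's contribution; the toy logistic model, weights, and `eps` below are illustrative assumptions):

```python
import numpy as np

def bce_loss(x, w, y):
    """Binary cross-entropy of a logistic model p = sigmoid(w . x)."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, y, eps):
    """FGSM: perturb the input along the sign of dL/dx, bounded by eps."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    grad_x = (p - y) * w          # gradient of the BCE loss w.r.t. the input
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])    # toy "trained" weights (assumption)
x = np.array([0.2, -0.1, 0.4])    # a legitimate example with label y = 1
x_adv = fgsm(x, w, y=1, eps=0.1)

# A tiny, bounded perturbation already increases the loss on this example.
assert bce_loss(x_adv, w, 1) > bce_loss(x, w, 1)
```

Adversarial training augments the training set with such perturbed inputs so the model also minimizes the loss on them.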

To alleviate these issues, a research team led by Yun Li published their new research on 15 April 2026 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
In the research, the team employs bias-variance decomposition to analyze the generalization error in the adversarial setting. In this way, exploring the trade-off between accuracy and robustness is turned into solving for optimal expectations. The proposed method, named ProbsCut, consists of two loss terms: a global loss and a local loss. The global loss concerns the relationship among different examples, while the local loss constrains the loss of every single example. The global loss can be combined with existing methods such as TRADES and MART. By introducing a bias-variance decomposition of the generalization error tailored to adversarial settings, the work transforms the heuristic search for a trade-off between accuracy and robustness into an exploration of the optimal expected probability for each category.
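A rough sketch of how such a combined objective could look is given below. The exact ProbsCut loss is not specified in this summary, so the weights `beta` and `gamma`, the Bernoulli form of the single-element KL, and the overall combination are assumptions for illustration; the per-example KL term mirrors the standard TRADES regularizer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())       # shift for numerical stability
    return e / e.sum()

def kl(p, q):
    """KL divergence between two full probability vectors."""
    return float(np.sum(p * np.log(p / q)))

def bernoulli_kl(p, q):
    """Single-element KL: treats the scalar probabilities p, q as Bernoulli."""
    return float(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

def total_loss(logits_clean, logits_adv, y, q_expected, beta=6.0, gamma=1.0):
    """Cross-entropy + local (per-example) term + global (cross-example) term.
    q_expected[y] is the expected probability for class y (assumed given)."""
    p_clean = softmax(logits_clean)
    p_adv = softmax(logits_adv)
    ce = -np.log(p_clean[y])                        # accuracy term
    local = kl(p_clean, p_adv)                      # aligns clean/adversarial outputs
    glob = bernoulli_kl(p_clean[y], q_expected[y])  # pulls target prob toward its class expectation
    return ce + beta * local + gamma * glob
```

In this sketch the global term is independent of the attack, so it can be added on top of a TRADES- or MART-style objective without changing how adversarial examples are generated.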

In detail, the global loss is a single-element Kullback–Leibler divergence, whose inputs each have only one element; it reduces the difference between the probability of the target category and the globally optimal expected probability across all examples, and can be viewed as a first-order moment estimation that reduces variance. The local loss, in turn, aligns the probability vector of each legitimate example with that of its corresponding adversarial example. The optimal expected probability of each category is updated simultaneously with the model parameters.
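The summary does not state how the expected probability of each category is updated alongside the parameters; one common way to maintain such a running statistic is an exponential moving average over the batch, sketched below purely as an assumption (function name, EMA form, and momentum are all hypothetical):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def update_expected_probs(q, batch_logits, batch_labels, momentum=0.9):
    """EMA update of the per-class expected target probability q.
    (The EMA rule and momentum value are illustrative assumptions,
    not the update rule from the paper.)"""
    q = q.copy()
    for logits, y in zip(batch_logits, batch_labels):
        p_y = softmax(logits)[y]                 # probability of the target class
        q[y] = momentum * q[y] + (1 - momentum) * p_y
    return q

q = np.full(3, 0.5)                              # initial per-class expectations
batch_logits = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]
batch_labels = [0, 1]
q_new = update_expected_probs(q, batch_logits, batch_labels)
# Classes seen in the batch drift toward their observed target probabilities;
# unseen classes keep their previous expectation.
```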

In future work, more efficient methods for determining the optimal expected probability for each category will be explored.
DOI: 10.1007/s11704-025-41225-3
Attached documents
  • Figure 1. Workflow of ProbsCut.
Regions: Asia, China
Keywords: Applied science, Computing


