New Approach Finds Privacy Vulnerability and Performance Are Intertwined in AI Neural Networks

Researchers have discovered that some of the elements of AI neural networks that contribute to data-privacy vulnerabilities are also key to the performance of those models. The researchers used this new information to develop a technique that better balances performance and privacy protection in these models.

The findings involve protecting neural networks against membership inference attacks (MIAs), which are techniques that allow attackers to determine whether a particular piece of data was used to train a specific AI model.

“MIAs can jeopardize the privacy of individuals whose data was part of the training dataset,” says Xingli Fang, first author of a paper on the work and a Ph.D. student at North Carolina State University. “For example, if an attacker has partial data from an individual, they could use an MIA to determine if an AI model was trained using data from that individual.”

“And if the individual’s data was used to train that model, the attacker could then infer the rest of the user’s information,” says Jung-Eun Kim, corresponding author of the paper and an assistant professor of computer science at NC State. “Basically, MIAs pose a privacy vulnerability.”
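To make the threat concrete, here is a minimal sketch of one classic membership inference strategy, the loss-threshold attack. This is a generic illustration, not the specific attacks evaluated in the paper: the attacker exploits the fact that models typically fit their training data more tightly, so a low loss on a sample hints that the sample was a training member.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Guess 'member' when a sample's loss falls below the threshold.

    Models usually achieve lower loss on data they were trained on,
    so a low loss is (weak) evidence of training-set membership.
    """
    return losses < threshold

# Toy per-sample losses: members tend to score lower than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.90, 1.20, 0.60])

guesses_m = loss_threshold_mia(member_losses, threshold=0.5)     # all True
guesses_n = loss_threshold_mia(nonmember_losses, threshold=0.5)  # all False
```

Real attacks are more sophisticated (e.g., training shadow models to calibrate the threshold), but the underlying signal is the same: overfitting leaks membership.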

To understand what the researchers learned, you have to understand “weight parameters.” Weight parameters are an important component of AI neural networks, such as large language models. Essentially, weight parameters serve as the synapses that link all of the neurons in the model together, and data inputs travel through these weight parameters as the model takes the data and produces an output.
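The role of weight parameters can be shown with a toy two-layer network (a simplified illustration, not the models studied in the paper): the input travels through the weight matrices, which link the neurons of one layer to the next, and the result is the model's output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight parameters: the "synapses" linking neurons between layers.
W1 = rng.normal(size=(4, 8))  # links 4 input neurons to 8 hidden neurons
W2 = rng.normal(size=(8, 2))  # links 8 hidden neurons to 2 output neurons

def forward(x):
    hidden = np.maximum(x @ W1, 0.0)  # data flows through W1, then a ReLU
    return hidden @ W2                # then through W2 to produce the output

x = rng.normal(size=(1, 4))  # one data input with 4 features
y = forward(x)               # output has shape (1, 2)
```

Every value the network computes passes through these weight matrices, which is why individual weights can matter so much for both performance and privacy.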

“When we started this project, we wanted to get a better understanding of which weight parameters in a model are most important for protecting privacy and which weight parameters are most important for performance,” says Kim. “It was fundamental AI research.”

“We found that only a few weight parameters represent a significant privacy vulnerability,” says Fang. “However, we were surprised to learn that the vulnerable weight parameters are also among the most important weight parameters when it comes to performance. This means it is extremely difficult to reduce vulnerability risk without also hurting performance.

“However, we were able to use our new insights to develop a novel approach for improving data privacy by modifying the weight parameters and going through a fine-tuning process to adjust the model.”
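The paper's exact procedure is not detailed in this release; a generic sketch of the "locate critical weights, modify them, then fine-tune" pattern might look like the following, where weight magnitude is used as a purely hypothetical stand-in for the paper's criterion for identifying critical weights.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=100)  # toy stand-in for a model's weight parameters

# Hypothetical proxy: flag the few largest-magnitude weights as "critical".
# (The paper's actual criterion for vulnerable weights is not given here.)
k = 5
critical_idx = np.argsort(np.abs(weights))[-k:]

# Modify the critical weights, e.g., by damping them toward zero...
weights[critical_idx] *= 0.1

# ...then fine-tune so the rest of the model compensates. A single
# gradient step on a toy quadratic loss stands in for that process.
grad = 2 * weights            # gradient of sum(w**2), a placeholder loss
weights -= 0.01 * grad        # one fine-tuning update
```

The point of the sketch is the two-stage structure: a targeted edit to a small set of weights, followed by a fine-tuning pass that restores performance across the whole model.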

To test the new approach, the researchers compared their privacy protection technique to four other techniques to see how they performed when defending against two state-of-the-art MIAs.

“We found that our approach achieves a better balance of privacy and performance relative to the previous techniques,” says Kim. “We’re happy to talk with anyone in the field about how to incorporate this approach into their training.”

The paper, “Learnability and Privacy Vulnerability Are Entangled in a Few Critical Weights,” will be presented at the Fourteenth International Conference on Learning Representations (ICLR2026), being held April 23-27 in Rio de Janeiro, Brazil.

“Learnability and Privacy Vulnerability Are Entangled in a Few Critical Weights”

Authors: Xingli Fang and Jung-Eun Kim, North Carolina State University

Presented: April 23-27, the Fourteenth International Conference on Learning Representations (ICLR2026), Rio de Janeiro, Brazil
Regions: North America, United States
Keywords: Applied science, Artificial Intelligence, Computing


