Researchers Found a Better Way to Teach Large Language Models New Skills

Researchers have developed a technique that significantly improves the performance of large language models without increasing the computational power necessary to fine-tune the models. The researchers demonstrated that their technique improves the performance of these models over previous techniques in tasks including commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

Large language models are artificial intelligence systems that are pretrained on huge data sets. After pretraining, these models predict which words should come next in order to respond to user queries. However, the nonspecific nature of pretraining leaves ample room for improvement when user queries focus on specific tasks, such as answering a math question or writing computer code.

“In order to improve a model’s ability to perform more specific tasks, you need to fine-tune the model,” says Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of computer engineering at North Carolina State University. “However, these models are so large that it is not feasible to re-train the entire model. Instead, you want to determine the smallest number of changes necessary to improve the model’s performance. We’ve developed a technique, called WeGeFT (pronounced wee-gift), that represents a significant advance for fine-tuning these large models.”

The big breakthrough for fine-tuning these large models was a technique called LoRA, short for low-rank adaptation, which came out in 2022. Rather than updating the full model, LoRA uses mathematical tools to learn a small, low-rank set of additional parameters that capture the changes most likely to improve a model’s performance on a specific task. There have been many attempts to improve upon LoRA, but Wu and his collaborators found these previous efforts either required significantly more computational power to improve performance, or used the same amount of computing power without improving performance.
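For readers unfamiliar with LoRA, the sketch below illustrates the general low-rank adapter idea in PyTorch: the pretrained weight is frozen, and only a small pair of low-rank matrices is trained on top of it. This is a minimal, generic illustration, not code from the WeGeFT paper; the rank, scaling factor, and layer dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer with a trainable low-rank update (generic LoRA sketch)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pretrained weight: frozen during fine-tuning (randomly initialized here for the sketch).
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)
        # Low-rank adapter: only these two small matrices are trained.
        self.lora_A = nn.Parameter(torch.zeros(rank, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        nn.init.normal_(self.lora_A, std=0.02)  # B stays zero, so training starts from the pretrained model
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained projection plus the learned low-rank correction B @ A.
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

# Example: adapting a 4096-dimensional projection trains ~65k parameters instead of ~16.8 million.
layer = LoRALinear(4096, 4096, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")
```

Because only the two small matrices receive gradients, the memory and compute cost of fine-tuning scales with the chosen rank rather than with the full size of the pretrained weight, which is what makes this family of methods practical for very large models.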

“WeGeFT builds on LoRA, but incorporates additional mathematical tools that allow us to determine which of the key parameters the model is already familiar with and which parameters the model would need to ‘learn,’” says Wu. “By placing more weight on the truly novel parameters, we are able to improve model performance compared to LoRA without incorporating significant new computational demands.”

In proof-of-concept testing, the researchers found that WeGeFT performed as well as or better than LoRA and its many variants across a variety of downstream tasks: commonsense reasoning, arithmetic reasoning, instruction following, code generation, and visual recognition.

“We think this is a valuable step forward,” Wu says. “We are now exploring ways that WeGeFT could also be used to identify elements of the model that are responsible for harmful outputs, with the goal of improving AI alignment and ‘surgery’ to improve model safety and outputs. We expect that work to be forthcoming.”

The paper, “WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models,” will be presented July 17 at the International Conference on Machine Learning, being held in Vancouver, Canada. Co-corresponding author of the paper is Chinmay Savadikar, a Ph.D. student at NC State. The paper was co-authored by Xi Song, an independent researcher.

This work was done with support from the National Science Foundation under grants 1909644, 2024688 and 2013451; and from the Army Research Office under grants W911NF1810295 and W911NF2210010.

“WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models”

Authors: Chinmay Savadikar and Tianfu Wu, North Carolina State University; Xi Song, independent researcher

Presented: July 13-19, International Conference on Machine Learning, Vancouver, Canada
Regions: North America, United States, Canada
Keywords: Applied science, Artificial Intelligence, Computing, Technology
