A flexible method for LoRA-based large language model fine-tuning

24/06/2025 Frontiers Journals

Parameter-Efficient Fine-Tuning (PEFT) methods aim to reduce the number of trainable parameters when adapting Large Language Models (LLMs) to downstream tasks, and have drawn considerable attention with the rapid development of LLMs. One representative method is Low-Rank Adaptation (LoRA), which decomposes the incremental weight matrix ∆W ∈ ℝ^{d×d} into low-rank matrices A ∈ ℝ^{r×d} and B ∈ ℝ^{d×r} (where r ≪ d) as follows:

h = W0x + ∆Wx = W0x + BAx.
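For readers who prefer code, the following is a minimal PyTorch sketch of this standard LoRA update; the class name, rank value, and initialisation are illustrative choices, not taken from the paper.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base weight W0 plus a trainable low-rank update BA (illustrative sketch)."""
        def __init__(self, d: int, r: int = 8):
            super().__init__()
            self.base = nn.Linear(d, d, bias=False)
            self.base.weight.requires_grad_(False)           # W0 stays frozen
            self.A = nn.Parameter(torch.randn(r, d) * 0.01)  # A ∈ R^{r×d}
            self.B = nn.Parameter(torch.zeros(d, r))         # B ∈ R^{d×r}, zero-init so ∆W = 0 at start

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # h = W0 x + B A x; only A and B receive gradients
            return self.base(x) + x @ self.A.t() @ self.B.t()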

Despite this progress, LoRA still has shortcomings. First, it lacks a granular consideration of the relative importance and optimal rank allocation within the decomposed matrices A and B. Second, in multi-task fine-tuning scenarios, LoRA fails to account for the inherently different rank requirements of different tasks.

To address these problems and improve the capability of LoRA-based fine-tuning, Kun Zhang and his team published their research on 15 May 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.

The team proposed adding more flexibility to the ranks of A and B to improve LoRA-based fine-tuning performance. Specifically, they first explored distinct rank settings for A and B and designed a novel Enhanced Matrix Decomposition for single-task scenarios. By introducing an additional matrix, different ranks can be assigned to the learned matrices to improve their flexibility as follows:

h = W0x + ∆Wx = W0x + B'TA'x,

where A' ∈ ℝ^{a×d}, B' ∈ ℝ^{d×b}, and T ∈ ℝ^{b×a}. Moreover, since {a, b, r} ≪ d, the proposed strategy does not increase the computational complexity.
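A minimal sketch of how this three-matrix decomposition could look in PyTorch is given below, assuming the same frozen-base setup as standard LoRA; the class name, the example ranks a and b, and the initialisation are illustrative assumptions, not the authors' released code.

    import torch
    import torch.nn as nn

    class EnhancedLoRALinear(nn.Module):
        """Sketch of the decomposition h = W0 x + B' T A' x with distinct ranks a and b."""
        def __init__(self, d: int, a: int = 4, b: int = 8):
            super().__init__()
            self.base = nn.Linear(d, d, bias=False)
            self.base.weight.requires_grad_(False)           # W0 stays frozen
            self.A = nn.Parameter(torch.randn(a, d) * 0.01)  # A' ∈ R^{a×d}
            self.T = nn.Parameter(torch.randn(b, a) * 0.01)  # T  ∈ R^{b×a} couples the two ranks
            self.B = nn.Parameter(torch.zeros(d, b))         # B' ∈ R^{d×b}, zero-init so ∆W = 0 at start

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # h = W0 x + B' T A' x; since a, b ≪ d, the extra matrix adds little cost
            return self.base(x) + x @ self.A.t() @ self.T.t() @ self.B.t()

Because a, b ≪ d, the additional matrix T contributes only b×a parameters, negligible next to the d×b and a×d factors, which is consistent with the claim that the strategy does not increase computational complexity.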

For multi-task learning, they treated each rank in the LoRA module as an expert and used a routing mechanism to select suitable experts for each task to perform the computation. In this way, different tasks can use different parts of the LoRA module during fine-tuning, enhancing the capability of LoRA-based fine-tuning in multi-task learning scenarios. One plausible reading of this rank-as-expert routing is sketched below.
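The sketch assumes a learned per-task gate with top-k selection over the ranks; the embedding-based router and the top-k choice are assumptions made for illustration, not the paper's exact mechanism.

    import torch
    import torch.nn as nn

    class RankRoutedLoRA(nn.Module):
        """Illustrative sketch: each of the r rank-1 components of BA is treated as an expert,
        and a per-task router decides which components participate in the update."""
        def __init__(self, d: int, r: int = 8, num_tasks: int = 4, top_k: int = 2):
            super().__init__()
            self.base = nn.Linear(d, d, bias=False)
            self.base.weight.requires_grad_(False)           # W0 stays frozen
            self.A = nn.Parameter(torch.randn(r, d) * 0.01)  # rows of A: per-expert input projections
            self.B = nn.Parameter(torch.zeros(d, r))         # columns of B: per-expert output projections
            self.router = nn.Embedding(num_tasks, r)         # per-task routing logits (assumed design)
            self.top_k = top_k

        def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
            gates = torch.softmax(self.router(task_id), dim=-1)             # (batch, r)
            topk_vals, topk_idx = gates.topk(self.top_k, dim=-1)
            mask = torch.zeros_like(gates).scatter(-1, topk_idx, topk_vals)  # keep only selected ranks
            contrib = (x @ self.A.t()) * mask                               # each rank's weighted contribution
            return self.base(x) + contrib @ self.B.t()

Under this reading, each task only updates and uses a subset of the rank experts, so the single LoRA module can serve several tasks without them competing for the same low-rank capacity.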

DOI: 10.1007/s11704-024-40317-w

Letter, Published: 15 May 2025

Dacao ZHANG, Fan YANG, Kun ZHANG, Xin LI, Si WEI, Richang HONG, Meng WANG. Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation. Front. Comput. Sci., 2025, 19(5): 195337, https://doi.org/10.1007/s11704-024-40317-w
Attached files
  • Fig 1. The overall diagram of the proposed method.
Regions: Asia, China
Keywords: Applied science, Computing

