Researchers have developed a novel deep learning training technique called "Decreasing Precision with layer Capacity" (DPC) that simultaneously improves the efficiency and optimization performance of modern deep neural networks (DNNs). The method was found to reduce training costs by 16.21% to 44.37% while improving average model accuracy by up to 0.68%.
To address these problems, a research team led by Dongsheng Li published their new research on 15 October 2025 in Frontiers of Computer Science, a journal co-published by Higher Education Press and Springer Nature.
The remarkable performance of modern DNNs comes at a significant computational cost due to the extensive training data and parameters required. This has limited the practical application of DNNs, especially for resource-constrained edge devices. To address this challenge, the researchers systematically investigated the potential benefits of leveraging lower precision during DNN training.
The key innovation of the DPC technique is its ability to automatically determine the precision bound and vary the precision according to the capacity of individual model layers. This spatial adaptation of precision allows DPC to achieve a win-win between training efficiency and model accuracy.
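The announcement does not spell out the exact capacity-to-precision rule, but the general idea of assigning lower numerical precision to higher-capacity layers can be sketched as follows. This is a minimal illustration, not the authors' implementation: the capacity measure (parameter count), the bit-width choices, and the fake-quantization helper are all assumptions made for the example.

```python
# Sketch: assign lower bit-widths to layers with larger capacity (parameter count)
# and keep small layers at higher precision. Hypothetical mapping, not the DPC rule.
import torch
import torch.nn as nn

def assign_layer_precision(model, bit_choices=(4, 8, 16)):
    """Map each parameterized layer to a bit-width based on its capacity."""
    layers = [m for m in model.modules() if isinstance(m, (nn.Linear, nn.Conv2d))]
    counts = [sum(p.numel() for p in m.parameters()) for m in layers]
    lo, hi = min(counts), max(counts)
    plan = {}
    for layer, count in zip(layers, counts):
        # In this sketch, larger layers are assumed to tolerate lower precision.
        frac = 0.0 if hi == lo else (count - lo) / (hi - lo)
        idx = min(int(frac * len(bit_choices)), len(bit_choices) - 1)
        plan[layer] = bit_choices[len(bit_choices) - 1 - idx]
    return plan

def fake_quantize_(tensor, bits):
    """In-place symmetric fake quantization of a tensor to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = tensor.abs().max().clamp(min=1e-8) / qmax
    tensor.copy_((tensor / scale).round().clamp(-qmax, qmax) * scale)

# Toy model used only to demonstrate the per-layer precision plan.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 256), nn.Linear(256, 10))
precision_plan = assign_layer_precision(model)
for layer, bits in precision_plan.items():
    with torch.no_grad():
        for p in layer.parameters():
            fake_quantize_(p, bits)
    print(f"{layer.__class__.__name__}: {bits}-bit")
```

In this toy plan the largest linear layer lands at 4-bit precision while the small convolution and output layers stay at 16-bit, which mirrors the spatial adaptation described above without claiming to reproduce DPC's automatic bound selection.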
Experimental results validate the surprising effectiveness of DPC, demonstrating improvements in both efficiency and optimization performance. The consistency of these results across multiple trials underscores the method's stability and reliability.
By uncovering the benefits of low precision during training, this research paves the way for more efficient and optimized deep learning deployments. The researchers also provided visual insights into DPC’s underlying mechanisms through feature embedding analyses, contributing to a deeper understanding of low-precision training.
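As a rough illustration of what such a feature embedding analysis typically involves, the sketch below projects high-dimensional features to two dimensions for visual inspection. The choice of t-SNE, the synthetic features, and the plotting details are assumptions for the example; the paper's actual tooling and figures are not reproduced here.

```python
# Sketch of a feature-embedding visualization, assuming t-SNE as the projection.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_feature_embedding(features, labels, title="Feature embedding"):
    """Project high-dimensional features to 2-D and colour points by class."""
    coords = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.figure(figsize=(5, 5))
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5, cmap="tab10")
    plt.title(title)
    plt.tight_layout()
    plt.show()

# Synthetic features standing in for a trained model's penultimate activations.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
features = rng.normal(size=(500, 64)) + labels[:, None]  # loosely class-separated
plot_feature_embedding(features, labels, "Sketch: feature embedding by class")
```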
DOI: 10.1007/s11704-024-40669-3