New AI model boosts hyperspectral detail

23/04/2026 TranSpread

Hyperspectral remote sensing captures detailed information across many narrow wavelength bands, making it valuable for applications such as geological exploration, military reconnaissance, and precision agriculture. Yet this strength comes with a limitation: imaging hardware often cannot deliver both high spectral resolution and high spatial resolution at the same time. Traditional hyperspectral image super-resolution methods are often computationally expensive and depend heavily on prior assumptions, while recent deep learning models still struggle to balance local texture recovery, long-range dependency modeling, and spectral consistency. These challenges motivate deeper research into hyperspectral image super-resolution.

Researchers from Sun Yat-sen University, Guangdong Polytechnic Normal University, and the University of Extremadura reported (DOI: 10.34133/remotesensing.1027) the new PLGMamba framework in Journal of Remote Sensing, published on March 17, 2026. The study presents a progressive local–global state-space model designed to reconstruct high-resolution hyperspectral images from low-resolution inputs without changing the imaging hardware, aiming to improve the accuracy of ground-cover interpretation in remote sensing systems.

The core innovation of PLGMamba lies in its progressive design. Instead of processing all spectral bands in one end-to-end stream, the model divides the low-resolution hyperspectral image into spectral groups and reconstructs them gradually. This strategy allows the network to better exploit local correlations between adjacent bands while still capturing broader spatial and spectral dependencies. The architecture includes two major modules: residual attention Mamba (RatMamba), which extracts local–global spectral–spatial features, and residual Mamba (ResMamba), which fuses these features into the final high-resolution output. In tests on the Chikusei, Houston, and Pavia datasets, as well as GF-5 data, PLGMamba outperformed classical, convolutional neural network (CNN)-based, Transformer-based, and other Mamba-based methods.
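The band-grouping step described above can be sketched in a few lines. The helper below is an illustrative assumption (NumPy, near-equal non-overlapping splits), not the authors' code; the real grouping policy may differ, for example by overlapping groups so neighbouring bands share context:

```python
import numpy as np

def split_spectral_groups(hsi, n_groups):
    """Split a hyperspectral cube (bands, H, W) into near-equal band groups.

    Hypothetical sketch of the grouped, progressive processing described
    in the article.
    """
    # np.array_split balances sizes when the band count is not divisible
    return np.array_split(hsi, n_groups, axis=0)

cube = np.random.rand(102, 32, 32)        # e.g. a Pavia-like 102-band patch
groups = split_spectral_groups(cube, 10)  # 10 groups, per the reported ablation
sizes = [g.shape[0] for g in groups]
print(len(groups), sizes)
```

Each group is then reconstructed separately before fusion, which keeps adjacent, strongly correlated bands together in one processing stream.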

The authors designed PLGMamba to overcome three major problems in existing hyperspectral image super-resolution models: the limited receptive field of CNNs, the high computational burden of Transformers, and the insufficient spectral awareness of many end-to-end models. RatMamba combines a residual CNN, spectral attention (SA), and Mamba to recover grouped high-resolution features while maintaining spectral consistency. ResMamba then fuses the grouped outputs and models long-range dependencies more efficiently. The loss function jointly optimizes spectral–spatial fidelity, spectral similarity, and spatial fidelity.
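The spectral attention (SA) idea can be illustrated with a minimal squeeze-and-excitation-style gate over bands. This is a generic channel-attention sketch in plain NumPy with no learned weights, not the paper's actual RatMamba block:

```python
import numpy as np

def spectral_attention(feat):
    """Reweight each band of a (bands, H, W) feature cube by a gate derived
    from its global average response. Real SA blocks insert a small learned
    MLP between the pooling and the sigmoid; it is omitted here for brevity.
    """
    squeeze = feat.mean(axis=(1, 2))       # global average pool per band
    gate = 1.0 / (1.0 + np.exp(-squeeze))  # sigmoid gate in (0, 1)
    return feat * gate[:, None, None]      # broadcast the gate over H and W

feat = np.random.rand(31, 16, 16)
out = spectral_attention(feat)
print(out.shape)  # (31, 16, 16): same shape, bands rescaled
```

The gate lets the network emphasize informative bands and suppress noisy ones, which is one common way to encourage spectral consistency.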

In the Chikusei scene, PLGMamba reached a peak signal-to-noise ratio (PSNR) of 44.058, a spectral angle mapping (SAM) value of 1.3404, and a relative error of global synthesis (ERGAS) of 10.069 at the ×2 scale factor, outperforming all comparison methods. At the ×4 scale factor on the Houston scene, it achieved a PSNR of 39.804, a SAM of 2.9186, and an ERGAS of 11.015. On real-world GF-5 imagery, it also produced the best no-reference performance, with the quality with no reference (QNR), spectral distortion (D_λ), and spatial distortion (D_s) indices reported as 0.9620, 0.0167, and 0.0217, respectively. The study also found that using 10 spectral groups gave the best overall performance when transferring the model to a new scenario.
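As a quick consistency check on the no-reference figures above: QNR is conventionally computed as (1 − D_λ)·(1 − D_s), with both exponents set to 1, and the two reported distortion indices do reproduce the reported QNR:

```python
# QNR is conventionally (1 - D_lambda)**alpha * (1 - D_s)**beta,
# with alpha = beta = 1 being the usual choice.
d_spectral = 0.0167  # reported spectral distortion index
d_spatial = 0.0217   # reported spatial distortion index

qnr = (1 - d_spectral) * (1 - d_spatial)
print(round(qnr, 4))  # 0.962, matching the reported 0.9620
```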

According to the paper, the method’s advantage is threefold: it progressively captures spectral–spatial features from local to global scales, integrates CNNs and spectral attention into RatMamba to preserve detail and spectral information, and uses ResMamba to fuse grouped high-resolution features while reducing distortion. Together, these elements explain why the model consistently delivered stronger reconstruction quality across datasets.

The team trained the model in PyTorch using the Adam optimizer for 200 epochs with a minibatch size of 12 on an NVIDIA RTX 3060 graphics processing unit (GPU). They evaluated PLGMamba on three public hyperspectral datasets (Chikusei, Houston, and Pavia) and on GF-5 satellite data. Performance was measured using spectral angle mapping (SAM), peak signal-to-noise ratio (PSNR), and relative error of global synthesis (ERGAS), while the GF-5 experiments also used no-reference quality metrics.

The authors suggest that future work will focus on improving performance at the ×8 scale factor and enabling terminal deployment of lightweight hyperspectral image super-resolution models. If successful, such advances could strengthen remote sensing across agriculture, environmental monitoring, resource exploration, and other fields where sharper hyperspectral imagery can improve decision-making without requiring more complex hardware systems.

###

References

DOI

10.34133/remotesensing.1027

Original Source URL

https://doi.org/10.34133/remotesensing.1027

Funding information

This work is supported by the National Natural Science Foundation of China under Grant No. 42271325, the National Key Research and Development Program of China under Grant No. 2020YFA0714103, Fundamental Research Funds for the Central Universities, Sun Yat-sen University under Grant No. 24lgqb002, the Innovation Group Project of Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) under Grant No. 311022018, Key Construction Discipline Scientific Research Capability Promotion Project of Guangdong Province under Grant No. 2022ZDJS015, Dazhi Leading Talent Project of Guangdong Polytechnic Normal University (GPNU), and Guangdong Provincial Key Laboratory of Intellectual Property & Big Data under Grant No. 2018B030322016.

About Journal of Remote Sensing

The Journal of Remote Sensing, an online-only Open Access journal published in association with AIR-CAS, promotes the theory, science, and technology of remote sensing, as well as interdisciplinary research within earth and information science.

Paper title: PLGMamba: A New Progressive Local–Global State-Space Model for Hyperspectral Image Super-Resolution
Attachments
  • Overall architecture of the proposed PLGMamba for the HSI-SR task.
Regions: North America, United States, Asia, China
Keywords: Science, Space Science

Disclaimer: AlphaGalileo is not responsible for the accuracy of content posted to AlphaGalileo by contributing institutions or for the use of any information through the AlphaGalileo system.
