Weakly Supervised Deep Learning Boosts Precision Agriculture

01/09/2025 TranSpread

This approach enables efficient, accurate segmentation of field vegetation, offering significant benefits for precision agriculture and sustainable food production.

As the world’s population edges toward 8.5 billion by 2030, the demand for food continues to surge. Meeting this need requires better tools for monitoring crop growth under increasingly variable environmental conditions, yet monitoring crops accurately in real-world fields remains a central challenge for global food security. Manual field assessments are slow, costly, and difficult to scale. While remote sensing and computer vision technologies have revolutionized agricultural monitoring, spectral image acquisition remains expensive, and traditional methods often require extensive data annotation. These challenges hinder widespread adoption of advanced monitoring solutions. Motivated by these limitations, researchers sought to develop a more cost-effective, scalable, and automated strategy for vegetation segmentation in agricultural fields.

A study (DOI: 10.1016/j.plaphe.2025.100021) published in Plant Phenomics on 1 March 2025 by Qi Wang’s team at Guizhou University demonstrates that integrating spectral reconstruction with weakly supervised segmentation offers a low-cost and highly accurate solution for real-world vegetation monitoring in precision agriculture.

To evaluate the effectiveness of the proposed vegetation segmentation methods, researchers first applied spectral reconstruction (SR) models to field data and compared them against existing approaches under challenging “strong noise” conditions. Using SRCNet and SRANet, they demonstrated that their methods produced superior reconstruction quality, particularly in preserving structural detail, and maintained consistent performance across diverse field scenes, underscoring their robustness and generalization.

Once the reconstructed spectral data were obtained, they were fused using a vegetation index (VI)-based strategy and segmented without manual labels. The segmentation results outperformed other deep learning methods as well as the widely used Segment Anything Model (SAM), showing higher accuracy and finer detail recognition in both dense and uniform vegetation distributions. Further testing on full-size field images confirmed that SRCNet could accurately delineate vegetation across complex agricultural landscapes.

Quantitative analysis revealed that the attention-based reconstruction model surpassed alternatives on metrics such as MRAE, SSIM, and PSNR, while segmentation performance achieved a mean intersection over union (MIoU) of 0.853, validating the approach’s suitability for downstream agricultural applications. Comparative evaluations also showed that the VI-based fusion strategy delivered more generalizable results than weight-based methods, while sensitivity tests revealed that early training epochs yielded better segmentation outcomes by minimizing the impact of strong noise. Ablation experiments confirmed the critical roles of spectral reconstruction, multispectral data, and VI features in enhancing segmentation accuracy. Finally, validation on the external FloodNet dataset demonstrated the model’s adaptability to varied environments beyond crop fields.

Collectively, these experiments establish that integrating SR techniques with weakly supervised segmentation provides a cost-effective, high-performance solution for field vegetation monitoring, combining technical rigor with strong applicability in precision agriculture.
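For readers unfamiliar with the MIoU score reported above: it is the intersection-over-union between predicted and reference masks, computed per class and averaged. The sketch below is an illustrative reimplementation of that standard metric for binary vegetation masks, not the authors' own code:

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int = 2) -> float:
    """Mean intersection over union across classes (here: 0 = background, 1 = vegetation)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 masks: 1 = vegetation, 0 = background
pred = np.array([[1, 1, 0], [0, 0, 0]])
truth = np.array([[1, 0, 0], [0, 0, 1]])
print(mean_iou(pred, truth))  # background IoU 3/5, vegetation IoU 1/3 -> 0.4667
```

A score of 0.853, as reported in the study, means the predicted vegetation and background regions overlap substantially with the reference masks across the test scenes.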

The proposed method has wide-ranging applications in smart farming and environmental monitoring. By reducing reliance on expensive spectral sensors and bypassing manual annotation, it lowers barriers for deploying advanced monitoring tools in resource-limited settings. The resulting vegetation maps can be used to calculate fractional vegetation cover, assess plant health, monitor growth dynamics, and even forecast disease or pest outbreaks. These insights empower farmers to make timely interventions, optimize resource use, and boost yields sustainably.
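As a minimal sketch of one downstream use mentioned above, fractional vegetation cover is simply the share of pixels classified as vegetation. The thresholded-NDVI mask here is a generic stand-in for the paper's segmentation output, and the function names and threshold value are illustrative assumptions, not taken from the study:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def fractional_cover(nir: np.ndarray, red: np.ndarray, threshold: float = 0.3) -> float:
    """Fraction of pixels flagged as vegetation by a simple NDVI threshold."""
    mask = ndvi(nir, red) > threshold
    return float(mask.mean())

# Toy 2x2 reflectance bands: top row vegetated, bottom row bare soil
nir = np.array([[0.8, 0.7], [0.2, 0.1]])
red = np.array([[0.1, 0.2], [0.2, 0.1]])
print(fractional_cover(nir, red))  # 0.5: two of four pixels exceed NDVI 0.3
```

In practice any per-pixel vegetation mask, including one produced by the weakly supervised model described here, can be averaged the same way to track cover over time.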

###

References

DOI

10.1016/j.plaphe.2025.100021

Original URL

https://doi.org/10.1016/j.plaphe.2025.100021

Funding information

This research was supported by the National Key R&D Program of China (2024YFD2001100, 2024YFE0214300), the National Natural Science Foundation of China (No. 62162008), Guizhou Provincial Science and Technology Projects ([2024]002, CXTD[2023]027), the Guizhou Province Youth Science and Technology Talent Project ([2024]317), the Guiyang Guian Science and Technology Talent Training Project ([2024]2-15), and the Talent Introduction Program of Guizhou University (Grant No. (2021)89).

About Plant Phenomics

Science Partner Journal Plant Phenomics is an online-only Open Access journal published in affiliation with the State Key Laboratory of Crop Genetics & Germplasm Enhancement, Nanjing Agricultural University (NAU) and distributed by the American Association for the Advancement of Science (AAAS). Like all partners participating in the Science Partner Journal program, Plant Phenomics is editorially independent from the Science family of journals. Editorial decisions and scientific activities pursued by the journal's Editorial Board are made independently, based on scientific merit and adhering to the highest standards for accurate and ethical promotion of science. These decisions and activities are in no way influenced by the financial support of NAU, NAU administration, or any other institutions and sponsors. The Editorial Board is solely responsible for all content published in the journal. To learn more about the Science Partner Journal program, visit the SPJ program homepage.

Title of original paper: Segmenting vegetation from UAV images via spectral reconstruction in complex field environments
Authors: Zhixun Pei, Xingcai Wu, Xue Wu, Yuanyuan Xiao, Peijia Yu, Zhenran Gao, Qi Wang, Wei Guo
Journal: Plant Phenomics
Original Source URL: https://doi.org/10.1016/j.plaphe.2025.100021
DOI: 10.1016/j.plaphe.2025.100021
Latest article publication date: 1 March 2025
Subject of research: Not applicable
COI statement: The authors declare that they have no competing interests.
Attachments
  • Figure 4. The main flowchart of the study, including data preprocessing, comparison between spectral reconstruction models, and application of the reconstructed data. SI′ represents the reconstructed spectral image. (a) Data processing, including alignment, cropping, and other operations on the raw data to make it suitable for SR tasks. (b) Adaptation to data from multiple scenarios using various types of SR models; only the training process requires spectral data. (c) Processing of SR results for application to relevant tasks.
Regions: Asia, China, North America, United States
Keywords: Applied science, Engineering

