From time series to a single snapshot: A smarter way to track wheat growth in real time

06/01/2026 TranSpread

By transferring temporal knowledge from complex time-series models to a compact model through knowledge distillation and attention mechanisms, the approach achieves high accuracy while greatly reducing data and computational demands. This enables real-time, field-ready wheat phenology monitoring suitable for practical agricultural deployment.
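The paper's full architecture is not reproduced here, but the core training idea — a compact single-image student supervised jointly by ground-truth labels, a multi-temporal teacher's softened predictions, and the teacher's intermediate attention maps — can be sketched as follows. This is a minimal, hypothetical PyTorch illustration: the temperature T, the weights alpha and beta, and the single feature pair are placeholders (the study transfers attention across multiple layers), not the authors' published code.

```python
import torch.nn.functional as F

def distillation_style_loss(student_logits, teacher_logits, labels,
                            student_feat, teacher_feat,
                            T=4.0, alpha=0.5, beta=0.1):
    """Hard-label CE + soft-label distillation + attention-map transfer."""
    # Hard supervision from annotated phenological stages.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label distillation: match the teacher's temperature-softened
    # class distribution (scaled by T^2 to keep gradient magnitudes stable).
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)

    # Attention transfer: compare L2-normalised spatial attention maps
    # derived from intermediate feature maps (assumed here to share H x W;
    # otherwise one map would be interpolated to match).
    def attention(feat):                         # feat: (B, C, H, W)
        a = feat.pow(2).mean(dim=1)              # channel-pooled energy (B, H, W)
        return F.normalize(a.flatten(1), dim=1)  # unit-norm vectors (B, H*W)

    at = (attention(student_feat) - attention(teacher_feat)).pow(2).mean()

    return ce + alpha * kd + beta * at
```

The point of this arrangement is that the teacher sees image sequences during its own training, so its outputs and attention maps encode temporal context; the student absorbs that context at training time but needs only one image at inference.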

Traditional wheat phenology monitoring relies heavily on manual field observation, which is labor-intensive, subjective, and unsuitable for large-scale or continuous monitoring. Vegetation indices derived from RGB or multispectral imagery offer partial automation but struggle to distinguish visually similar growth stages and often require expert calibration and long time-series data. Deep learning has improved automation and accuracy by extracting rich visual features directly from images, yet most single-image models fail to capture the dynamic nature of crop growth. Multi-temporal deep learning models address this limitation but introduce new challenges, including large model size, high energy consumption, complex data pipelines, and poor real-time performance—especially on resource-constrained edge devices. These trade-offs have limited their practical adoption in everyday farming.

A study (DOI: 10.1016/j.plaphe.2025.100144) published in Plant Phenomics on 4 December 2025 by Xiaohu Zhang’s team at Nanjing Agricultural University presents a method, WPDSI, that enables efficient, real-time wheat phenology detection suitable for practical field deployment.

The study developed and evaluated a lightweight wheat phenology detection model optimized for single-temporal images through knowledge distillation and multi-layer attention transfer. Model training and evaluation were conducted on a high-performance computing server equipped with dual Intel Xeon CPUs, seven NVIDIA Tesla A100 GPUs, and large-memory support, ensuring stable and efficient optimization. Parameters were learned by backpropagation with the Adam optimizer, chosen to balance convergence speed and model performance, and dropout regularization was applied to reduce overfitting and enhance generalization. Training used a batch size of 16, a learning rate of 0.0001, and a dropout rate of 0.3. Performance was assessed with multiple complementary metrics — confusion matrices for class-level predictions, overall accuracy (OA), F1-score, kappa coefficient, and mean absolute error (MAE) — enabling a robust evaluation of both classification accuracy and consistency across phenological stages.

The proposed model achieved an OA of 0.927, MAE of 0.075, F1-score of 0.929, and kappa coefficient of 0.916, demonstrating accuracy comparable to complex multi-temporal models despite using only single images. When benchmarked under identical training conditions against widely used deep learning architectures, including ResNet50, MobileNetV3, EfficientNetV2, RepVGG, SCNet, STViT, and PhenoNet, it consistently outperformed all comparators, with accuracy gains ranging from 2.5% to 17.5%. Notably, the lightweight student model lost only 0.8% accuracy relative to its multi-temporal teacher while substantially reducing computational cost.

Confusion matrix analysis showed a pronounced diagonal structure, indicating reduced misclassification across the eight phenological stages, particularly for visually ambiguous middle stages such as jointing, booting, anthesis, and flowering. Evaluation on an unseen second-year dataset confirmed strong generalization, with an OA of 0.917 and stable performance across varying lighting conditions, wheat varieties, and field scenes, underscoring the model’s robustness and suitability for real-time agricultural deployment.
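As a concrete illustration of how the four headline metrics behave together, the following hypothetical scikit-learn snippet computes OA, macro F1, kappa, and MAE on invented predictions (the arrays are illustrative, not the study's data). MAE is meaningful here because the eight stages are encoded as ordered integers, so it measures how far off a misclassification lands:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             cohen_kappa_score, mean_absolute_error)

# Invented ground-truth and predicted stage indices (0-7 = eight stages).
y_true = np.array([0, 1, 2, 3, 4, 5, 6, 7, 3, 4])
y_pred = np.array([0, 1, 2, 4, 4, 5, 6, 7, 3, 5])

oa    = accuracy_score(y_true, y_pred)             # overall accuracy
f1    = f1_score(y_true, y_pred, average="macro")  # class-balanced F1
kappa = cohen_kappa_score(y_true, y_pred)          # chance-corrected agreement
mae   = mean_absolute_error(y_true, y_pred)        # mean stage offset

print(f"OA={oa:.3f}  F1={f1:.3f}  kappa={kappa:.3f}  MAE={mae:.3f}")
```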

By requiring only a single image for inference, the proposed model dramatically reduces data storage needs, computational cost, and inference time. The lightweight student model processes images at real-time speeds suitable for on-farm deployment, including integration with field cameras, drones, or low-power edge devices. This capability makes accurate wheat phenology monitoring accessible to smallholder farmers and large-scale operations alike, without dependence on continuous image collection or auxiliary data such as weather records.
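To make the single-image, real-time claim concrete, a one-shot inference and timing harness might look like the sketch below; the tiny stand-in network, the 224 x 224 input, and the eight-class head are hypothetical placeholders, not the published WPDSI architecture:

```python
import time
import torch

# Stand-in for the lightweight student model (placeholder, not WPDSI).
student = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 8),   # eight phenological stages
).eval()

image = torch.randn(1, 3, 224, 224)  # one RGB field image

with torch.no_grad():
    start = time.perf_counter()
    stage = student(image).argmax(dim=1).item()
    elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Predicted stage {stage} in {elapsed_ms:.1f} ms")
```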

###

References

DOI

10.1016/j.plaphe.2025.100144

Original Source URL

https://doi.org/10.1016/j.plaphe.2025.100144

Funding information

This research was supported by the National Key Research and Development Program of China (2024YFD2301100), the National Natural Science Foundation of China (Grant No. 32171892), the Qing Lan Project of Jiangsu Universities, and Jiangsu Agricultural Science and Technology Innovation Fund (CX (21) 1006).

About Plant Phenomics

Plant Phenomics is dedicated to publishing novel research that advances all aspects of plant phenotyping, from the cell to the plant population level, using innovative combinations of sensor systems and data analytics. The journal also aims to connect phenomics to other scientific domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. In doing so, Plant Phenomics contributes to advancing plant sciences and agriculture, forestry, and horticulture by addressing key scientific challenges in plant phenomics.

Title of original paper: WPDSI: A deep learning method for wheat phenology detection from single-temporal images
Authors: Yan Li, Yucheng Cai, Xuerui Qi, Suyi Liu, Xiangxin Zhuang, Hengbiao Zheng, Yongchao Tian, Yan Zhu, Weixing Cao, Xiaohu Zhang
Journal: Plant Phenomics
Original Source URL: https://doi.org/10.1016/j.plaphe.2025.100144
DOI: 10.1016/j.plaphe.2025.100144
Latest article publication date: 4 December 2025
Subject of research: Not applicable
COI statement: The authors declare that they have no competing interests.
Attached files
  • Figure 2 Technical framework.
Regions: North America, United States, Asia, China
Keywords: Applied science, Engineering
