Next-generation vision model maps tree growth at sub-meter precision

22/12/2025 TranSpread

Monitoring forest canopy structure is essential for understanding global carbon cycles, assessing tree growth, and managing plantation resources. Traditional lidar systems provide accurate height data but are limited by high costs and technical complexity, while optical remote sensing often lacks the structural precision required for small-scale plantations. Deep learning methods have improved canopy estimation but still demand massive labeled datasets and often lose fine spatial details. Moreover, global models struggle to adapt to fragmented plantation landscapes with uniform tree structures. Due to these challenges, developing a cost-effective, high-resolution, and generalizable approach for mapping canopy height and biomass has become an urgent research priority.

A joint research team from Beijing Forestry University, Manchester Metropolitan University, and Tsinghua University has developed a new artificial intelligence (AI)-driven vision model that delivers sub-meter accuracy in estimating tree heights from RGB satellite images. Published (DOI: 10.34133/remotesensing.0880) in the Journal of Remote Sensing on October 20, 2025, the study introduces a novel framework that combines large vision foundation models (LVFMs) with self-supervised learning. The approach addresses the long-standing problem of balancing cost, precision, and scalability in forest monitoring—offering a promising tool for managing plantations and tracking carbon sequestration under initiatives such as China’s Certified Emission Reduction program.

The researchers created a canopy height estimation network composed of three modules: a feature extractor powered by the DINOv2 large vision foundation model, a self-supervised feature enhancement unit to retain fine spatial details, and a lightweight convolutional height estimator. The model achieved a mean absolute error of only 0.09 m and an R² of 0.78 when compared with airborne lidar measurements, outperforming traditional CNN and transformer-based methods. It also enabled over 90% accuracy in single-tree detection and strong correlations with measured above-ground biomass (AGB). Beyond its accuracy, the model demonstrated strong generalization across forest types, making it suitable for both regional and national-scale carbon accounting.
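The paper's implementation is not reproduced here, but the three-module layout described above can be sketched in a few dozen lines of PyTorch. In the sketch below, the choice of DINOv2 variant (ViT-S/14 via torch.hub), the convolutional enhancement head, the regressor widths, and the bilinear upsampling back to input resolution are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch under the assumptions stated above; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CanopyHeightNet(nn.Module):
    """Frozen LVFM features -> enhancement unit -> lightweight height regressor."""
    def __init__(self, feat_dim=384, patch=14):
        super().__init__()
        self.patch = patch
        # Module 1: frozen DINOv2 feature extractor (ViT-S/14, downloaded via torch.hub).
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Module 2: feature-enhancement unit (stand-in for the paper's
        # self-supervised enhancement that preserves fine spatial detail).
        self.enhance = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        # Module 3: lightweight convolutional height estimator (metres per pixel).
        self.head = nn.Sequential(
            nn.Conv2d(feat_dim, 128, 3, padding=1), nn.GELU(),
            nn.Conv2d(128, 1, 1),
        )

    def forward(self, rgb):  # rgb: (B, 3, H, W), H and W multiples of 14, ImageNet-normalized
        b, _, h, w = rgb.shape
        tokens = self.backbone.forward_features(rgb)["x_norm_patchtokens"]
        feat = tokens.transpose(1, 2).reshape(b, -1, h // self.patch, w // self.patch)
        feat = feat + self.enhance(feat)      # residual refinement of patch features
        height = self.head(feat)              # coarse per-patch height map
        return F.interpolate(height, size=(h, w), mode="bilinear", align_corners=False)
```

Freezing the backbone keeps the trainable parameter count small, in line with the paper's emphasis on avoiding massive labeled datasets: only the enhancement unit and the regressor would be fitted against the lidar reference.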

The model was tested in the Fangshan District of Beijing, an area with fragmented plantations primarily composed of Populus tomentosa, Pinus tabulaeformis, and Ginkgo biloba. Using one-meter-resolution Google Earth imagery and lidar-derived references, the AI model produced canopy height maps closely matching ground truth data. It significantly outperformed global CHM products, capturing subtle variations in tree crown structure that existing models often missed. The generated maps supported individual-tree segmentation and plantation-level biomass estimation with R² values exceeding 0.9 for key species. Moreover, when applied to a geographically distinct forest in Saihanba, the network maintained robust accuracy, confirming its cross-regional adaptability. The ability to reconstruct annual growth trends from archived satellite imagery provides a scalable solution for long-term carbon sink monitoring and precision forestry management. This innovation bridges the gap between expensive lidar surveys and low-resolution optical methods, enabling detailed forest assessment with minimal data requirements.
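For readers unfamiliar with how a canopy height map translates into above-ground biomass, the snippet below shows one common route: segment individual trees, take a per-tree height, and apply a species-specific allometric relation of the form AGB = a·H^b. The coefficients and the crown-top height rule are placeholders for illustration; the paper's actual allometric models and species parameters are not reproduced here.

```python
# Illustrative only: placeholder allometric coefficients, not values from the paper.
import numpy as np

ALLOMETRY = {  # hypothetical (a, b) pairs for AGB = a * H**b, AGB in kg, H in m
    "Populus tomentosa":   (0.12, 2.3),
    "Pinus tabulaeformis": (0.09, 2.5),
    "Ginkgo biloba":       (0.11, 2.2),
}

def plot_agb(chm: np.ndarray, tree_mask: np.ndarray, species: str) -> float:
    """Sum per-tree AGB over a plot from a canopy height map (chm, metres)
    and a single-tree segmentation mask (0 = background, >0 = tree id)."""
    a, b = ALLOMETRY[species]
    total = 0.0
    for tree_id in np.unique(tree_mask):
        if tree_id == 0:
            continue
        tree_height = chm[tree_mask == tree_id].max()  # crown-top height of this tree
        total += a * tree_height ** b
    return total
```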

“Our model demonstrates that large vision foundation models (LVFMs) can fundamentally transform forestry monitoring,” said Dr. Xin Zhang, corresponding author at Manchester Metropolitan University. “By combining global image pretraining with local self-supervised enhancement, we achieved lidar-level precision using ordinary RGB imagery. This approach drastically reduces costs and expands access to accurate forest data for carbon accounting and environmental management.”

The team employed an end-to-end deep-learning framework combining pre-trained LVFM features with a self-supervised enhancement process. High-resolution Google Earth imagery (2013–2020) was used as input, and UAV-based lidar data served as the reference for training and validation. The model was implemented in PyTorch and trained using the fastai framework on an NVIDIA RTX A6000 GPU. Comparative experiments with conventional networks (U-Net and DPT) and global CHM datasets confirmed superior accuracy and efficiency, validating the model’s potential for scalable canopy height mapping and biomass estimation.
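The article names PyTorch and fastai but gives no training code. The sketch below follows fastai's standard pattern of wrapping plain PyTorch DataLoaders in a Learner; the chip dataset, the stand-in model (a full LVFM-based network such as the earlier sketch would replace it), the L1 loss (chosen to mirror the reported mean-absolute-error metric), the batch size, and the one-cycle schedule are all assumptions rather than the authors' configuration.

```python
# Minimal training sketch under the assumptions stated above.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from fastai.vision.all import DataLoaders, Learner

class ChipDataset(Dataset):
    """Paired RGB chips and lidar-derived canopy-height chips (hypothetical loader)."""
    def __init__(self, rgb, chm):
        self.rgb, self.chm = rgb, chm
    def __len__(self):
        return len(self.rgb)
    def __getitem__(self, i):
        return self.rgb[i], self.chm[i]        # (3, H, W) float, (1, H, W) metres

# Random tensors stand in for real Google Earth imagery / UAV-lidar references.
train_ds = ChipDataset(torch.rand(64, 3, 224, 224), 20 * torch.rand(64, 1, 224, 224))
valid_ds = ChipDataset(torch.rand(16, 3, 224, 224), 20 * torch.rand(16, 1, 224, 224))
dls = DataLoaders(DataLoader(train_ds, batch_size=8, shuffle=True),
                  DataLoader(valid_ds, batch_size=8))

# Stand-in regressor; swap in a full LVFM-based network for real use.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))

learn = Learner(dls, model, loss_func=nn.L1Loss())  # L1 loss tracks mean absolute error
learn.fit_one_cycle(10, lr_max=1e-4)                # one-cycle schedule, 10 epochs
```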

The AI-based mapping framework offers a powerful and affordable approach for tracking forest growth, optimizing plantation management, and verifying carbon credits. Its adaptability across ecosystems makes it suitable for global afforestation and reforestation monitoring programs. Future research will extend this method to natural and mixed forests, integrate automated species classification, and support real-time carbon monitoring platforms. As the world advances toward net-zero goals, such intelligent, scalable mapping tools could play a central role in achieving sustainable forestry and climate-change mitigation.

###

References

DOI: 10.34133/remotesensing.0880

Original Source URL: https://spj.science.org/doi/10.34133/remotesensing.0880

Funding Information

This study was supported by the National Science Foundation of China (72140005) and the Natural Science Foundation of Beijing, China (grant no. 3252016), and in part by BBSRC (BB/R019983/1, BB/S020969/), EPSRC (EP/X013707/1), and the Key Research and Development Program of Shaanxi Province (program no. 2024NC-YBXM-220).

About Journal of Remote Sensing

The Journal of Remote Sensing, an online-only Open Access journal published in association with AIR-CAS, promotes the theory, science, and technology of remote sensing, as well as interdisciplinary research within earth and information science.

Paper title: A Novel Large Vision Foundation Model-Based Approach for Generating High-Resolution Canopy Height Maps in Plantations for Precision Forestry Management
Attached files
  • RGB imagery of the study area. The positions of three plots with lidar reference data are marked by yellow rectangles; the centers of 1,436 plantations with species and age labels are shown as red dots.
Regions: North America, United States, Europe, United Kingdom, Asia, China
Keywords: Science, Physics
