By combining unmanned aerial system (UAS) imagery with the Segment Anything Model (SAM) for plot segmentation and convolutional neural networks (CNNs) for trait classification, a new phenotyping pipeline delivers reliable, plot-level estimates of canopy height (CH), growth habit (GH), and mainstem prominence (MP).
Peanut is a major crop for food and trade, requiring continuous breeding gains in yield, quality, and stress resistance. Yet progress is often constrained by field phenotyping, which is labor-intensive and difficult to scale. Architectural traits such as canopy height, growth habit, and mainstem prominence are especially important for canopy structure, adaptation, and mechanical harvestability. While drones accelerate data collection, image processing and plot segmentation often remain semi-manual, particularly in heterogeneous fields with overlapping canopies or misaligned rows. Foundation vision models like SAM offer more generalizable segmentation with minimal training data, but agricultural scenes still pose challenges from clutter and shadows.
A study (DOI: 10.1016/j.plaphe.2025.100126) published in Plant Phenomics on 27 October 2025 by Peggy Ozias-Akins’s team, University of Georgia, demonstrates that a fully automated, SAM- and deep learning–based UAS phenotyping pipeline can reliably replace labor-intensive field measurements while preserving the genetic signals needed for large-scale peanut breeding and QTL analysis.
This study presents a fully automated high-throughput phenotyping (HTP) pipeline that integrates UAS-derived orthomosaics, SAM, digital surface modeling, and CNNs to extract plot-level architectural traits in peanut breeding trials. The workflow begins with SAM auto-mask generation to identify experimental field boundaries and estimate field orientation without manual input, followed by metric-based post-processing that uses stability scores and predicted IoU to select optimal masks. Individual plots are then delineated with SAM’s interactive mode, using automatically generated multi-point prompts derived from temporary plot centroids; this eliminates hand-drawn plot polygons and enables scalable plot extraction in heterogeneous fields.
For canopy height (CH) estimation, local terrain points adjacent to each plot are sampled to reconstruct a plot-specific digital terrain model and a normalized digital surface model (nDSM), isolating plant height from terrain variability caused by furrows, wheel tracks, or raised beds. Plot-level CH estimates showed strong agreement with manual measurements (R² ≈ 0.78, RMSE < 3 cm, MAPE ≈ 10%), confirming effective terrain normalization.
Growth habit (GH) and mainstem prominence (MP) were classified with pretrained CNNs (AlexNet, ResNet18, and EfficientNet-B0) trained on RGB imagery, nDSM data, or their combination. For GH, AlexNet achieved the highest accuracy (88.5%) when combining RGB and nDSM inputs, with balanced recall for Bunch and Spreading types, highlighting the value of integrating structural and visual information. In contrast, MP estimation relied primarily on structural cues: nDSM-only models reached accuracies of about 83%, and adding RGB provided little benefit.
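The terrain-normalization step can be sketched in outline: subtract a plot-local terrain elevation, estimated from bare-ground pixels sampled next to the plot, from the digital surface model, then summarize the resulting nDSM inside the plot. The function, its parameters, and the percentile summary below are illustrative assumptions, not the authors’ published implementation:

```python
import numpy as np

def estimate_canopy_height(dsm, plot_mask, terrain_mask, percentile=90):
    """Estimate plot-level canopy height from a digital surface model (DSM).

    dsm          : 2D array of surface elevations (meters)
    plot_mask    : boolean array marking pixels inside the plot
    terrain_mask : boolean array marking bare-ground pixels sampled
                   adjacent to the plot
    percentile   : upper percentile of the nDSM used as the plot's
                   canopy height (robust to isolated tall pixels)
    """
    # Plot-specific terrain elevation: median of nearby bare-ground pixels.
    terrain_elev = np.median(dsm[terrain_mask])
    # Normalized DSM: height above the local terrain.
    ndsm = dsm - terrain_elev
    # Plot-level canopy height from the upper percentile inside the plot.
    return float(np.percentile(ndsm[plot_mask], percentile))

# Toy example: 10x10 DSM, flat terrain at 100 m, a 0.4 m canopy in the plot.
dsm = np.full((10, 10), 100.0)
plot = np.zeros((10, 10), bool); plot[3:7, 3:7] = True
terrain = np.zeros((10, 10), bool); terrain[:, 0] = True
dsm[plot] += 0.4
print(round(estimate_canopy_height(dsm, plot, terrain), 2))  # 0.4
```

Sampling terrain locally per plot, rather than fitting one field-wide terrain model, is what makes the estimate robust to furrows, wheel tracks, and raised beds.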
Automated field identification achieved IoU values mostly above 0.95 and orientation errors below 0.8°, and plot segmentation reached a specificity of 0.99, sensitivity of 0.87, and a Dice coefficient of 0.92, at roughly 2 seconds per plot. Finally, QTL analyses using HTP-derived GH and MP identified the same major loci on chromosome Arahy.15 as conventional phenotyping, demonstrating that the automated pipeline delivers accurate, efficient, and genetically informative phenotypes suitable for large-scale breeding applications.
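The reported segmentation scores are standard pixel-wise agreement measures between a predicted plot mask and a reference mask. A minimal sketch of how they are computed (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise agreement between predicted and reference binary masks.

    pred, truth : boolean arrays of the same shape
    Returns sensitivity (recall), specificity, and the Dice coefficient.
    """
    tp = np.sum(pred & truth)    # plot pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background mislabeled as plot
    fn = np.sum(~pred & truth)   # plot pixels missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice

# Toy masks: a 4x4 reference plot, with the prediction offset by one column.
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[2:6, 3:7] = True
sens, spec, dice = segmentation_metrics(pred, truth)
print(round(sens, 2), round(spec, 2), round(dice, 2))  # 0.75 0.92 0.75
```

Specificity is high whenever the background dominates the image, which is why the study’s sensitivity (0.87) and Dice coefficient (0.92) are the more informative indicators of plot-boundary quality.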
A pipeline like this can help breeding teams phenotype large trials more frequently and consistently, enabling faster selection cycles and stronger datasets for QTL mapping, GWAS, and genomic selection. Automating plot segmentation and trait extraction can also improve reproducibility across years and locations, since the same rules are applied uniformly instead of relying on manual delineation or scorer-dependent ratings.
###
References
DOI
10.1016/j.plaphe.2025.100126
Original Source URL
https://doi.org/10.1016/j.plaphe.2025.100126
Funding information
This work was supported by the United States Department of Agriculture National Institute of Food and Agriculture [Award Number 2022-67013-37365].
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that advances all aspects of plant phenotyping, from the cell to the plant population level, using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. Plant Phenomics thus contributes to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in plant phenomics.