Differences in canopy architecture play a role in determining both light and water use efficiency. Canopy architecture is determined by several component traits, including leaf length, width, number, angle, and phyllotaxy. Phyllotaxy may be among the most difficult of the leaf canopy traits to measure accurately across large numbers of individual plants. As a result, in simulations of the leaf canopies of grain crops such as maize and sorghum, this trait is frequently approximated as alternating 180° angles between sequential leaves. We explore the feasibility of extracting direct measurements of the phyllotaxy of sequential leaves from 3D reconstructions of individual sorghum plants generated from calibrated 2D images, and we test the assumption of consistently alternating phyllotaxy across a diverse set of sorghum genotypes. Using a voxel-carving-based approach, we generate 3D reconstructions from multiple calibrated 2D images of 366 sorghum plants representing 236 genotypes from the sorghum association panel. The correlation between automated and manual measurements of phyllotaxy is only modestly lower than the correlation between manual measurements of phyllotaxy generated by two different individuals. Automated phyllotaxy measurements exhibited a repeatability of R² = 0.41 across imaging timepoints separated by two days. A resampling-based genome-wide association study (GWAS) identified several putative genetic associations with lower-canopy phyllotaxy in sorghum. This study demonstrates the potential of 3D reconstruction to enable both quantitative genetic investigation and breeding for phyllotaxy in sorghum and other grain crops with similar plant architectures.
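The divergence angle between sequential leaves can be computed from a 3D reconstruction by projecting each leaf's base-to-tip vector onto the horizontal plane and differencing the resulting azimuths. The sketch below illustrates that geometry under stated assumptions; the function names, the `(base, tip)` input representation, and the folding of angles into [0, 180] are illustrative choices, not the paper's actual pipeline.

```python
import math

def leaf_azimuth(base, tip):
    """Azimuth (degrees, 0-360) of a leaf's base-to-tip vector
    projected onto the horizontal (x, y) plane."""
    dx, dy = tip[0] - base[0], tip[1] - base[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def phyllotaxy_angles(leaves):
    """Divergence angle between each pair of sequential leaves.

    `leaves` is an ordered list of (base_xyz, tip_xyz) tuples, lowest
    leaf first; returned angles are folded into [0, 180]."""
    azimuths = [leaf_azimuth(b, t) for b, t in leaves]
    angles = []
    for lower, upper in zip(azimuths, azimuths[1:]):
        d = abs(upper - lower) % 360.0
        angles.append(min(d, 360.0 - d))  # fold reflex angles
    return angles

# Two sequential leaves pointing in exactly opposite directions,
# i.e. the idealized alternating-phyllotaxy assumption -> 180 degrees
leaves = [((0, 0, 0), (1, 0, 10)), ((0, 0, 5), (-1, 0, 15))]
print(phyllotaxy_angles(leaves))
```

Under the idealized alternating-phyllotaxy assumption every entry of this list would be 180; deviations measured from real reconstructions are what the study quantifies.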
Field‐based robotic leaf angle detection and characterization of maize plants using stereo vision and deep convolutional neural networks
Maize (Zea mays L.) is one of the three major cereal crops in the world. Leaf angle is an important architectural trait of crops due to its substantial role in light interception by the canopy and hence photosynthetic efficiency. Traditionally, leaf angle has been measured using a protractor, a process that is both slow and laborious. Efficiently measuring leaf angle under field conditions via imaging is challenging due to leaf density in the canopy and the resulting occlusions. However, advances in imaging technologies and machine learning have provided new tools for image acquisition and analysis that could be used to characterize leaf angle using three-dimensional (3D) models of field-grown plants. In this study, PhenoBot 3.0, a robotic vehicle designed to traverse between pairs of agronomically spaced rows of crops, was equipped with multiple tiers of PhenoStereo cameras to capture side-view images of maize plants in the field. PhenoStereo is a customized stereo camera module with integrated strobe lighting for high-speed stereoscopic image acquisition under variable outdoor lighting conditions. An automated image processing pipeline (AngleNet) was developed to measure leaf angles of nonoccluded leaves. In this pipeline, a novel representation form of leaf angle as a triplet of keypoints was proposed. The pipeline employs convolutional neural networks to detect each leaf angle in two-dimensional images and 3D modeling approaches to extract quantitative data from reconstructed models. Our study demonstrates the feasibility of using stereo vision to investigate the distribution of leaf angles in maize under field conditions. The proposed system is an efficient alternative to traditional leaf angle phenotyping and thus could accelerate breeding for improved plant architecture.
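Once the three keypoints of a leaf-angle triplet are detected, the angle itself reduces to the angle at the middle keypoint between two vectors. A minimal sketch of that final step is below; the assumption that the triplet is (stalk point, stalk-leaf junction, leaf point) with the angle measured at the junction follows the description above, but the function name and point layout are illustrative, not AngleNet's actual API.

```python
import math

def angle_from_triplet(stalk_pt, junction, leaf_pt):
    """Leaf angle (degrees) at the junction keypoint, between the
    junction->stalk vector and the junction->leaf vector. Points may
    be (x, y) in image space or (x, y, z) from the 3D model."""
    u = [a - b for a, b in zip(stalk_pt, junction)]
    v = [a - b for a, b in zip(leaf_pt, junction)]
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    cos_t = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding noise
    return math.degrees(math.acos(cos_t))

# A leaf 45 degrees off a vertical stalk, in 2D image coordinates
print(angle_from_triplet((0, 1), (0, 0), (1, 1)))  # ~45 degrees
```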
- Award ID(s): 2210259
- PAR ID: 10427901
- Journal Name: Journal of Field Robotics
- ISSN: 1556-4959
- Page Range / eLocation ID: 1-20
- Sponsoring Org: National Science Foundation
More Like this
-
Stimulated Raman projection tomography is a label-free volumetric chemical imaging technology that allows three-dimensional (3D) reconstruction of the chemical distribution in a biological sample from angle-dependent stimulated Raman scattering projection images. However, the projection image acquisition process requires rotating the sample, contained in a capillary glass held by a complicated sample rotation stage, which limits the volumetric imaging speed and inhibits the study of living samples. Here, we report a tilt-angle stimulated Raman projection tomography (TSPRT) system that acquires angle-dependent projection images by utilizing tilt-angle beams to image the sample from different azimuth angles sequentially. The TSPRT system, which is free of sample rotation, enables rapid scanning of different views by a tailor-designed four-galvo-mirror scanning system. We present the design of the optical system, the theory, and the calibration procedure for chemical tomographic reconstruction. 3D vibrational images of polystyrene beads and C. elegans are demonstrated in the C-H vibrational region.
-
Bucksch, Alexander Clarke (Ed.) Understanding root traits is essential to improve water uptake, increase nitrogen capture, and accelerate carbon sequestration from the atmosphere. However, high-throughput phenotyping to quantify root traits for deeper, field-grown roots remains a challenge. Recently developed open-source methods use 3D reconstruction algorithms to build 3D models of plant roots from multiple 2D images and can extract root traits and phenotypes. Most of these methods rely on automated image orientation (Structure from Motion) [1] and dense image matching (Multiple View Stereo) algorithms to produce a 3D point cloud or mesh model from 2D images. Until now, the performance of these methods when applied to field-grown roots has not been compared. We tested commonly used open-source pipelines on a panel of twelve contrasting maize genotypes grown in real field conditions [2-6]. We compare the 3D point clouds produced in terms of number of points, computation time, and model surface density. This comparison study provides insight into the performance of different open-source pipelines for maize root phenotyping and illuminates trade-offs between 3D model quality and performance cost for future high-throughput 3D root phenotyping. DOI: https://doi.org/10.1002/essoar.10508794.2
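Two of the comparison metrics above, point count and a density measure, can be read directly off a point cloud. The sketch below computes them under a simplifying assumption: density is taken relative to the axis-aligned bounding-box volume, which is only a coarse stand-in for the paper's model surface density; the function and key names are illustrative.

```python
def cloud_metrics(points):
    """Summary metrics for comparing reconstruction pipelines:
    point count, plus points per unit volume of the axis-aligned
    bounding box (a simplified stand-in for surface density).
    `points` is a list of (x, y, z) tuples."""
    n = len(points)
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    extents = [hi - lo for lo, hi in zip(mins, maxs)]
    volume = extents[0] * extents[1] * extents[2]
    return {"n_points": n,
            "density": n / volume if volume else float("inf")}

# Five points spanning a 1 x 2 x 5 bounding box
cloud = [(0, 0, 0), (1, 0, 0), (0, 2, 0), (0, 0, 5), (1, 2, 5)]
print(cloud_metrics(cloud))  # 5 points / volume 10 -> density 0.5
```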
-
Optimizing leaf angle and other canopy architecture traits has helped modern maize (Zea mays L.) become adapted to higher planting densities over the last 60 years. Traditional investigations into genetic control of leaf angle have focused on one leaf or the average of multiple leaves; as a result, our understanding of genetic control across multiple canopy levels is still limited. To address this, genetic mapping across four canopy levels was conducted in the present study to investigate the genetic control of leaf angle across the canopy. We developed two populations of doubled haploid lines derived from three inbreds with distinct leaf angle phenotypes. These populations were genotyped with genotyping-by-sequencing and phenotyped for leaf angle at four different canopy levels over multiple years. To understand how leaf angle changes across the canopy, the four measurements were used to derive three additional traits. Composite interval mapping was conducted with the leaf-specific measurements and the derived traits. A set of 59 quantitative trait loci (QTLs) were uncovered for seven traits, and two genomic regions were consistently detected across multiple canopy levels. Additionally, seven genomic regions were found to contain consistent QTLs with either relatively stable or dynamic effects at different canopy levels. Prioritizing the selection of QTLs with dynamic effects across the canopy will aid breeders in selecting maize hybrids with the ideal canopy architecture that continues to maximize yield on a per-area basis under increasing planting densities.
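Four per-level measurements yield exactly three between-level differences, which is one plausible way to construct the three derived traits mentioned above. The sketch below assumes that construction; the abstract does not specify how the derived traits were actually defined, so treat this as an illustration only.

```python
def derive_canopy_traits(angles):
    """Given leaf angles measured at four canopy levels (listed top
    to bottom), derive three additional traits as the change in angle
    between adjacent levels. One plausible construction; the study's
    actual derived traits may be defined differently."""
    assert len(angles) == 4, "expected one angle per canopy level"
    return [b - a for a, b in zip(angles, angles[1:])]

# Angles widening toward the bottom of the canopy
print(derive_canopy_traits([20.0, 30.0, 45.0, 60.0]))  # [10.0, 15.0, 15.0]
```

Traits like these capture the trajectory of leaf angle down the canopy, which is exactly the kind of dynamic effect the QTL analysis distinguishes from a level-stable effect.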
-
Automated canopy stress classification for field crops has traditionally relied on single-perspective, two-dimensional (2D) photographs, usually obtained through top-view imaging with unmanned aerial vehicles (UAVs). However, this approach may fail to capture the full extent of plant stress symptoms, which can manifest throughout the canopy. Recent advancements in LiDAR technologies have enabled the acquisition of high-resolution 3D point cloud data for the entire canopy, offering new possibilities for more accurate plant stress identification and rating. This study explores the potential of leveraging 3D point cloud data for improved plant stress assessment. We utilized a dataset of RGB 3D point clouds of 700 soybean plants from a diversity panel exposed to iron deficiency chlorosis (IDC) stress. From this unique set of 700 canopies exhibiting varying levels of IDC, we extracted several representations, including (a) handcrafted IDC symptom-specific features, (b) canopy fingerprints, and (c) latent feature-based features. We then trained several classification models to predict plant stress severity from these representations, exhaustively investigating combinations of stress representation and model for the 3D data. We also compared the performance of these classification models against similar models trained only on the associated top-view 2D RGB image for each plant. Among the feature-model combinations tested, the 3D canopy fingerprint features trained with a support vector machine yielded the best performance, achieving higher classification accuracy than the best-performing 2D model built using convolutional neural networks. Our findings demonstrate the utility of color canopy fingerprinting and underscore the importance of considering 3D data to assess plant stress in agricultural applications.
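The intuition behind a color canopy fingerprint is that IDC shifts foliage from green toward yellow, so a fixed-length summary of point colors can serve as a classifier input. Below is a deliberately tiny sketch of that idea: a normalized histogram of green-to-red ratios over an RGB point cloud. The published fingerprint features are richer than this, and the binning scheme here is an assumption made for illustration.

```python
def color_fingerprint(colors, bins=4):
    """A toy 'canopy fingerprint': a normalized histogram of the
    green-to-red ratio over all points in an RGB point cloud.
    `colors` is a list of (r, g, b) tuples with 0-255 channels.
    Healthy green foliage has high g/r; chlorotic yellow foliage
    has g/r near 1, so the two produce distinct fingerprints."""
    ratios = [g / (r + 1) for r, g, b in colors]  # +1 avoids div by zero
    hist = [0] * bins
    for x in ratios:
        idx = min(int(x / 2.0 * bins), bins - 1)  # bin ratios over [0, 2)
        hist[idx] += 1
    total = len(ratios)
    return [h / total for h in hist]

healthy = [(40, 160, 40)] * 3     # green foliage, g/r well above 1
chlorotic = [(200, 200, 40)] * 3  # yellowed foliage, g/r near 1
print(color_fingerprint(healthy))
print(color_fingerprint(chlorotic))
```

A fixed-length vector like this is exactly the kind of input a support vector machine accepts, which is how the study's fingerprint features were ultimately classified.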