
Title: Complementary Phenotyping of Maize Root System Architecture by Root Pulling Force and X-Ray Imaging
The root system is critical for the survival of nearly all land plants and a key target for improving abiotic stress tolerance, nutrient accumulation, and yield in crop species. Although many methods of root phenotyping exist, within field studies, one of the most popular methods is the extraction and measurement of the upper portion of the root system, known as the root crown, followed by trait quantification based on manual measurements or 2D imaging. However, 2D techniques are inherently limited by the information available from single points of view. Here, we used X-ray computed tomography to generate highly accurate 3D models of maize root crowns and created computational pipelines capable of measuring 71 features from each sample. This approach improves estimates of the genetic contribution to root system architecture and is refined enough to detect various changes in global root system architecture over developmental time as well as more subtle changes in root distributions as a result of environmental differences. We demonstrate that root pulling force, a high-throughput method of root extraction that provides an estimate of root mass, is associated with multiple 3D traits from our pipeline. Our combined methodology can therefore be used to calibrate and interpret root pulling force measurements across a range of experimental contexts or scaled up as a stand-alone approach in large genetic studies of root system architecture.
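The calibration step described above amounts to checking how strongly root pulling force tracks a 3D trait across samples. A minimal sketch of that association check, using Pearson correlation on entirely illustrative values (the forces, volumes, and the choice of root volume as the trait are assumptions, not data from the study):

```python
import numpy as np

# Hypothetical data: root pulling force (N) and one X-ray-derived 3D trait
# (e.g., total root-crown volume, cm^3) for six samples. Values are
# illustrative only.
rpf = np.array([220.0, 310.0, 180.0, 260.0, 400.0, 150.0])
root_volume = np.array([12.1, 15.8, 9.7, 13.5, 19.2, 8.4])

# Pearson correlation between the pulling-force proxy and the 3D trait;
# a strong positive r is what would justify using RPF as a calibrated
# stand-in for the slower X-ray measurement.
r = np.corrcoef(rpf, root_volume)[0, 1]
print(f"Pearson r = {r:.3f}")
```

In practice each of the 71 pipeline features could be screened this way to find which ones pulling force best predicts.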
Authors:
; ; ; ; ; ; ; ;
Award ID(s):
1638507
Publication Date:
NSF-PAR ID:
10301756
Journal Name:
Plant Phenomics
Volume:
2021
Page Range or eLocation-ID:
1 to 12
ISSN:
2643-6515
Sponsoring Org:
National Science Foundation
More Like this
  1. Bucksch, Alexander Clarke (Ed.)
    Understanding root traits is essential to improving water uptake, increasing nitrogen capture, and accelerating carbon sequestration from the atmosphere. High-throughput phenotyping to quantify root traits for deeper field-grown roots remains a challenge, however. Recently developed open-source methods use 3D reconstruction algorithms to build 3D models of plant roots from multiple 2D images and can extract root traits and phenotypes. Most of these methods rely on automated image orientation (Structure from Motion) [1] and dense image matching (Multiple View Stereo) algorithms to produce a 3D point cloud or mesh model from 2D images. Until now, however, the performance of these methods when applied to field-grown roots has not been compared. We tested commonly used open-source pipelines on a test panel of twelve contrasting maize genotypes grown in real field conditions [2-6]. We compare the 3D point clouds produced in terms of number of points, computation time, and model surface density. This comparison study provides insight into the performance of different open-source pipelines for maize root phenotyping and illuminates trade-offs between 3D model quality and performance cost for future high-throughput 3D root phenotyping. DOI: https://doi.org/10.1002/essoar.10508794.2
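The comparison metrics named above (point count and model surface density) are simple to compute once a pipeline has produced a cloud and a mesh. A hedged sketch with a synthetic point cloud and an assumed mesh surface area standing in for real SfM + MVS output:

```python
import numpy as np

# Synthetic stand-in for a reconstructed root point cloud: 50,000 XYZ
# points (cm). A real cloud would come from an SfM + MVS pipeline.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 10.0, size=(50_000, 3))

n_points = points.shape[0]
surface_area_cm2 = 850.0  # assumed; would come from the reconstructed mesh
density = n_points / surface_area_cm2  # points per cm^2 of model surface

print(n_points, round(density, 1))
```

Computation time, the third metric, would simply be wall-clock time around each pipeline run.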
  2. Abstract. High-resolution remote sensing imagery has been increasingly used for flood applications. Different methods have been proposed for flood extent mapping from high-resolution data, from water index creation to image classification. Among these methods, deep learning methods have shown promising results for flood extent extraction; however, these two-dimensional (2D) image classification methods cannot directly provide water level measurements. This paper presents an integrated approach to extract the flood extent in three dimensions (3D) from UAV data by integrating a 2D deep learning-based flood map with a 3D point cloud extracted by a Structure from Motion (SfM) method. We fine-tuned a pretrained Visual Geometry Group 16 (VGG-16) based fully convolutional model to create a 2D inundation map. The 2D classified map was overlaid on the SfM-based 3D point cloud to create a 3D flood map. The floodwater depth was estimated by subtracting a pre-flood Digital Elevation Model (DEM) from the SfM-based DEM. The results show that the proposed method is efficient in creating a 3D flood extent map to support emergency response and recovery activities during a flood event.
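The depth-estimation step in this abstract is a raster subtraction gated by the classified flood mask. A minimal sketch with small synthetic DEMs (the grids, elevations, and the 0.1 m mask threshold are all assumptions for illustration):

```python
import numpy as np

# Pre-flood ground elevation (m) on a tiny 4x4 grid.
pre_flood_dem = np.full((4, 4), 100.0)

# SfM-derived DEM during the flood: the water surface raises elevation
# where the scene is inundated. Offsets are synthetic.
sfm_dem = pre_flood_dem + np.array([
    [0.0, 0.0, 0.5, 0.8],
    [0.0, 0.3, 0.9, 1.2],
    [0.0, 0.4, 1.0, 1.1],
    [0.0, 0.0, 0.2, 0.6],
])

# Stand-in for the 2D deep-learning inundation map (True = water).
flood_mask = (sfm_dem - pre_flood_dem) > 0.1

# Floodwater depth: DEM difference, kept only inside the flood extent.
depth = np.where(flood_mask, sfm_dem - pre_flood_dem, 0.0)
print(depth.max())  # deepest estimated floodwater, m
```

In the paper's workflow the mask would come from the fine-tuned VGG-16 classifier rather than a threshold.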
  3. Abstract State-of-the-art models of Root System Architecture (RSA) do not allow simulating root growth around rigid obstacles. Yet, the presence of obstacles can be highly disruptive to the root system. We grew wheat seedlings in sealed petri dishes without obstacles and in custom 3D-printed rhizoboxes containing obstacles. Time-lapse photography was used to reconstruct the wheat root morphology network. We used the reconstructed wheat root network without obstacles to calibrate an RSA model implemented in the R-SWMS software. The root network with obstacles allowed calibrating the parameters of a new function that models the influence of rigid obstacles on wheat root growth. Experimental results show that the presence of a rigid obstacle does not affect the growth rate of the wheat root axes, but that it does influence the root trajectory after the main axis has passed the obstacle. The growth recovery time, i.e., the time for the main root axis to recover its geotropism-driven growth, is proportional to the time during which the main axis grows along the obstacle. Qualitative and quantitative comparisons between experimental and numerical results show that the proposed model successfully simulates wheat RSA growth around obstacles. Our results suggest that wheat roots follow patterns that could inspire the design of adaptive engineering flow networks.
  4. Abstract The development of crops with deeper roots holds substantial promise to mitigate the consequences of climate change. Deeper roots are an essential factor to improve water uptake as a way to enhance crop resilience to drought, to increase nitrogen capture, to reduce fertilizer inputs, and to increase carbon sequestration from the atmosphere to improve soil organic fertility. A major bottleneck to achieving these improvements is high-throughput phenotyping to quantify root phenotypes of field-grown roots. We address this bottleneck with Digital Imaging of Root Traits (DIRT)/3D, an image-based 3D root phenotyping platform, which measures 18 architecture traits from mature field-grown maize (Zea mays) root crowns (RCs) excavated with the Shovelomics technique. DIRT/3D reliably computed all 18 traits, including distance between whorls and the number, angles, and diameters of nodal roots, on a test panel of 12 contrasting maize genotypes. The computed results were validated through comparison with manual measurements. Overall, we observed a coefficient of determination of r² > 0.84 and a high broad-sense heritability of H²mean > 0.6 for all but one trait. The average values of the 18 traits and a developed descriptor to characterize complete root architecture distinguished all genotypes. DIRT/3D is a step toward automated quantification of highly occluded maize RCs. Therefore, DIRT/3D supports breeders and root biologists in improving carbon sequestration and food security in the face of the adverse effects of climate change.
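The broad-sense heritability reported here can be estimated from variance components of a one-way ANOVA over genotypes with replication. A hedged sketch on synthetic data (the genotype count, replicate count, and effect sizes are assumptions, not DIRT/3D's data or its exact estimator):

```python
import numpy as np

# Synthetic trial: g genotypes, r replicates per genotype, with a
# genetic effect (sd 2.0) plus residual noise (sd 1.0).
rng = np.random.default_rng(1)
g, r = 12, 4
genotype_effects = rng.normal(0.0, 2.0, size=g)
trait = genotype_effects[:, None] + rng.normal(0.0, 1.0, size=(g, r))

# One-way ANOVA mean squares.
grand_mean = trait.mean()
ms_genotype = r * ((trait.mean(axis=1) - grand_mean) ** 2).sum() / (g - 1)
ms_error = ((trait - trait.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (r - 1))

# Plot-level broad-sense heritability: H2 = Vg / (Vg + Ve),
# with Vg = (MS_genotype - MS_error) / r (floored at zero).
vg = max((ms_genotype - ms_error) / r, 0.0)
h2 = vg / (vg + ms_error)
print(f"H2 = {h2:.2f}")
```

With a true genetic variance four times the residual variance, H² should land near 0.8, well above the paper's 0.6 threshold.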
  5. An important problem in designing human-robot systems is the integration of human intent and performance in the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method of bimanual coordination modes (i.e., the directions and symmetries of right and left hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture pulling gestures on the da Vinci system, with results matching the expected modes.
Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, and thus inspire future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
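The "together / parallel / away" direction modes lend themselves to a simple geometric test: project the relative hand velocity onto the line connecting the two hands. The sketch below is an illustrative classifier only; the threshold, labels, and descriptor choice are assumptions, not the paper's exact method:

```python
import numpy as np

def direction_mode(pos_l, pos_r, vel_l, vel_r, tol=0.2):
    """Classify bimanual direction from hand positions and velocities.

    Positive separation rate means the hands are moving apart ("away"),
    negative means closing ("together"); near zero, they move "parallel".
    """
    between = pos_r - pos_l                  # left hand -> right hand
    between = between / np.linalg.norm(between)
    separation_rate = np.dot(vel_r - vel_l, between)
    if separation_rate > tol:
        return "away"
    if separation_rate < -tol:
        return "together"
    return "parallel"

# Hands 1 unit apart on the x-axis, each moving outward at 0.5 units/s.
mode = direction_mode(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                      np.array([-0.5, 0.0]), np.array([0.5, 0.0]))
print(mode)  # away
```

An online version would evaluate such descriptors on a sliding window of the manipulator trajectories.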