Reliable seed yield estimation is an indispensable step in plant breeding programs geared toward cultivar development in major row crops. The objective of this study is to develop a machine learning (ML) approach adept at soybean (Glycine max (L.) Merr.) pod counting to enable genotype seed yield rank prediction from in-field video data collected by a ground robot. To meet this goal, we developed a multiview image-based yield estimation framework utilizing deep learning architectures. Plant images captured from different angles were fused to estimate yield and subsequently to rank soybean genotypes for use in breeding decisions. We used data from a controlled in-field imaging environment as well as from in-field plant breeding test plots to demonstrate the efficacy of our framework by comparing its performance with manual pod counting and yield estimation. Our results demonstrate the promise of ML models for making breeding decisions with a significant reduction in time and human effort, opening new avenues in breeding methods for cultivar development.
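The abstract describes fusing per-view estimates into one yield figure and then ranking genotypes, but does not specify the fusion rule. A minimal sketch, assuming a simple mean over views (the names `fuse_views`, `rank_genotypes`, and the counts are hypothetical illustrations, not the paper's method):

```python
def fuse_views(view_counts):
    """Average the pod counts predicted from each camera view of one plot.
    A plain mean is an assumption; the paper's fusion may be learned."""
    return sum(view_counts) / len(view_counts)

def rank_genotypes(plot_counts):
    """Return genotype names sorted from highest to lowest fused count.

    plot_counts: dict mapping genotype -> list of per-view pod counts.
    """
    fused = {g: fuse_views(v) for g, v in plot_counts.items()}
    return sorted(fused, key=fused.get, reverse=True)

# Toy example with made-up counts from three views per genotype.
counts = {"G1": [210, 198, 205], "G2": [250, 262, 255], "G3": [180, 175, 190]}
print(rank_genotypes(counts))  # highest estimated yield first: ['G2', 'G1', 'G3']
```

For breeding decisions, only the relative order of genotypes matters, which is why a ranking (rather than the absolute count) is the output here.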
This content will become publicly available on October 1, 2026
NeRF-LAI: A hybrid method combining neural radiance field and gap-fraction theory for deriving effective leaf area index of corn and soybean using multi-angle UAV images
Methods based on upward canopy gap fractions are widely employed to measure in-situ effective leaf area index (Le) as an alternative to destructive sampling. However, these measurements are limited to the point level and are not practical to scale up to larger areas. To bridge this point-to-landscape gap, this study introduces NeRF-LAI, an approach for corn and soybean Le estimation that combines gap-fraction theory with the neural radiance field (NeRF), an emerging neural network-based method for implicitly representing 3D scenes from multi-angle 2D images. The trained NeRF-LAI renders downward photorealistic hemispherical depth images from arbitrary viewpoints in the 3D scene and then calculates gap fractions to estimate Le. To investigate the intrinsic difference between upward and downward gap estimates, initial tests on virtual corn fields demonstrated that downward Le matches upward Le well and that Le estimation is insensitive to viewpoint height in a homogeneous field. We then conducted intensive real-world experiments in controlled plots and farmer-managed fields to test the effectiveness and transferability of NeRF-LAI, collecting multi-angle UAV oblique images of corn and soybean across phenological stages. Results showed that NeRF-LAI renders photorealistic synthetic images with an average peak signal-to-noise ratio (PSNR) of 18.94 for the controlled corn plots and 19.10 for the controlled soybean plots. We further explored three methods to estimate Le from the calculated gap fractions: the 57.5° method, the five-ring-based method, and the cell-based method. Among these, the cell-based method achieved the best performance, with r² ranging from 0.674 to 0.780 and relative RMSE (RRMSE) ranging from 1.95% to 5.58%.
The Le estimates are sensitive to viewpoint height in heterogeneous fields, owing to differences in the observable foliage volume, but much less so in relatively homogeneous fields. Additionally, cross-site testing for pixel-level LAI mapping showed that NeRF-LAI significantly outperforms vegetation-index (VI)-based models, with a small variation in RMSE (0.71 to 0.95 m²/m²) across spatial resolutions from 0.5 m to 2.0 m. This study extends gap fraction-based Le estimation from a discrete point scale to a continuous field scale by leveraging the implicit 3D neural representations learned by NeRF. NeRF-LAI can map Le from raw multi-angle 2D images without prior information, offering a more flexible and efficient alternative to the traditional in-situ plant canopy analyzer.
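The 57.5° method mentioned above rests on a standard Beer's-law inversion of the gap fraction: P(θ) = exp(−G·Le/cos θ), where G is the foliage projection coefficient, which is close to 0.5 at a zenith angle of about 57.5° for most leaf angle distributions. A minimal sketch of that single-angle inversion only (NeRF-LAI's full pipeline computes gap fractions from rendered hemispherical depth images, which is not reproduced here):

```python
import math

def le_from_gap_fraction(gap_fraction, zenith_deg=57.5, G=0.5):
    """Effective LAI from a canopy gap fraction via Beer's-law inversion:
        P(theta) = exp(-G * Le / cos(theta))  =>  Le = -cos(theta) * ln(P) / G
    Near 57.5 degrees, G is approximately 0.5 regardless of leaf angle
    distribution, which is what makes the '57.5 degree method' robust.
    """
    theta = math.radians(zenith_deg)
    return -math.cos(theta) * math.log(gap_fraction) / G

# A gap fraction of 0.30 observed at 57.5 degrees implies Le of about 1.29.
print(round(le_from_gap_fraction(0.30), 2))  # 1.29
```

A fully closed canopy (gap fraction near 0) drives Le toward infinity, which is why gap-fraction methods saturate in dense canopies.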
- PAR ID: 10621097
- Publisher / Repository: Elsevier
- Journal Name: Remote Sensing of Environment
- Volume: 328
- Issue: C
- ISSN: 0034-4257
- Page Range / eLocation ID: 114844
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Many vision-based indoor localization methods require tedious and comprehensive pre-mapping of built environments. This research proposes a mapping-free approach to estimating indoor camera poses based on a 3D style-transferred building information model (BIM) and photogrammetry. To address the cross-domain gap between virtual 3D models and real-life photographs, a CycleGAN model was developed to transform BIM renderings into photorealistic images. A photogrammetry-based algorithm was then developed to estimate camera pose using the visual and spatial information extracted from the style-transferred BIM. Experiments demonstrated the efficacy of CycleGAN in bridging the cross-domain gap, significantly improving image retrieval and feature correspondence detection. With the 3D coordinates retrieved from BIM, the proposed method achieves near real-time camera pose estimation with an accuracy of 1.38 m and 10.1° in indoor environments.
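The reported accuracies (1.38 m, 10.1°) are the two standard pose-error metrics: Euclidean distance between camera positions and the geodesic angle between rotation matrices. A minimal sketch of how such errors are typically computed (the evaluation code itself is not given in the abstract, so this is an assumed but conventional formulation):

```python
import math

def translation_error(t_est, t_gt):
    """Euclidean distance between estimated and ground-truth camera
    positions (metres if the model coordinates are in metres)."""
    return math.dist(t_est, t_gt)

def rotation_error_deg(R_est, R_gt):
    """Geodesic rotation error in degrees between two 3x3 rotation
    matrices given as lists of rows: arccos((trace(R_gt^T R_est) - 1) / 2)."""
    trace = sum(R_gt[i][j] * R_est[i][j] for i in range(3) for j in range(3))
    cos_a = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp numeric noise
    return math.degrees(math.acos(cos_a))

# Example: a pose that is off by 1 m in x and rotated 10 degrees about z.
a = math.radians(10.0)
R_est = [[math.cos(a), -math.sin(a), 0.0],
         [math.sin(a),  math.cos(a), 0.0],
         [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(translation_error([1.0, 0.0, 0.0], [0.0, 0.0, 0.0]),
      round(rotation_error_deg(R_est, I3), 1))  # 1.0 10.0
```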
- Measurements from the Event Horizon Telescope enabled the visualization of light emission around a black hole for the first time. So far, these measurements have been used to recover a 2D image under the assumption that the emission field is static over the period of acquisition. In this work, we propose BH-NeRF, a novel tomography approach that leverages gravitational lensing to recover the continuous 3D emission field near a black hole. Compared to other 3D reconstruction or tomography settings, this task poses two significant challenges: first, rays near black holes follow curved paths dictated by general relativity; second, we only observe measurements from a single viewpoint. Our method captures the unknown emission field with a continuous volumetric function parameterized by a coordinate-based neural network and uses knowledge of Keplerian orbital dynamics to establish correspondence between 3D points over time. Together, these enable BH-NeRF to recover accurate 3D emission fields, even in challenging situations with sparse measurements and uncertain orbital dynamics. This work takes the first steps in showing how future measurements from the Event Horizon Telescope could be used to recover the evolving 3D emission around the supermassive black hole in our Galactic center.
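The classical building block behind "Keplerian orbital dynamics" is propagating an orbiting point from one time to another, which requires solving Kepler's equation M = E − e·sin E for the eccentric anomaly E. A minimal sketch of that solver (BH-NeRF's actual correspondence model is more involved and relativistic; this shows only the Newtonian ingredient):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E via Newton's method.
    M: mean anomaly (radians), e: eccentricity in [0, 1)."""
    E = M if e < 0.8 else math.pi  # common initial guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Sanity check: substituting E back into Kepler's equation recovers M.
E = eccentric_anomaly(1.0, 0.5)
print(round(E - 0.5 * math.sin(E), 10))  # 1.0
```

Because M grows linearly with time, solving for E at each observation time lets one locate the same emission feature at different epochs, which is the correspondence idea the abstract refers to.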
- Freshwater mussels are essential parts of our ecosystems, reducing water pollution. As natural bio-filters, they deactivate pollutants such as heavy metals, providing a sustainable method for water decontamination. This project enables the use of artificial intelligence (AI) to monitor mussel behavior, particularly gaping activity, so that mussels can serve as bio-indicators for early detection of water contamination. In this paper, we employ advanced 3D reconstruction techniques to create detailed models of mussels and improve the accuracy of AI-based analysis. Specifically, we use a state-of-the-art 3D reconstruction tool, Neural Radiance Fields (NeRF), to create 3D models of mussel valve configurations and behavioral patterns. NeRF enables 3D reconstruction of scenes and objects from a sparse set of 2D images. To capture these images, we developed a data collection system capable of imaging mussels from multiple viewpoints. The system features a turntable made of foam board, with markers around the edges and a designated space in the center for mounting the mussels. The turntable is attached to a servo motor controlled by an ESP32 microcontroller; it rotates in few-degree increments, with the ESP32 camera capturing an image at each step. The images, along with degree information and timestamps, are stored on a Secure Digital (SD) memory card. Several components, such as the camera holder and turntable base, are 3D printed. These images are used to train a NeRF model with the Python-based Nerfstudio framework, and the resulting 3D models are viewed via the Nerfstudio API. The setup was designed to be user-friendly, making it easy to use in educational outreach and allowing secondary-school students to replicate and operate 3D reconstructions of objects of their choice. We validated the accessibility and impact of this platform in a STEM education summer program.

  A team of high school students from the Juntos Summer Academy at NC State University worked with this platform, gaining hands-on experience in embedded hardware development, basic machine learning principles, and 3D reconstruction from 2D images. We also report their feedback on the activity.
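The capture loop described above pairs each frame with its turntable angle and a timestamp. A minimal Python sketch of that schedule and record layout (the real firmware runs on the ESP32 and is presumably C/Arduino; the 5° step size is an assumption standing in for "a few degree increments"):

```python
import time

def capture_schedule(step_deg=5, total_deg=360):
    """Yield (frame_index, angle_deg) pairs for one full turntable
    revolution, one capture per rotation step."""
    for i in range(total_deg // step_deg):
        yield i, i * step_deg

# Each record mirrors the (image, degree, timestamp) tuples stored on
# the SD card; here the image is stood in for by its frame index.
records = [(i, angle, time.time()) for i, angle in capture_schedule()]
print(len(records), records[0][1], records[-1][1])  # 72 0 355
```

Recording the angle with each frame is what lets NeRF training associate every 2D image with a known viewpoint around the subject.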
- We present a novel method for soybean [Glycine max (L.) Merr.] yield estimation leveraging high-throughput seed counting via computer vision and deep learning. Traditional methods for collecting yield data are labor-intensive, costly, prone to equipment failures at critical data collection times, and require transporting equipment across field sites. Computer vision, the field of teaching computers to interpret visual data, allows us to extract detailed yield information directly from images. By treating yield estimation as a computer vision task, we report a more efficient alternative, employing a ground robot equipped with fisheye cameras to capture comprehensive videos of soybean plots across a variety of development programs, from which images are extracted. These images are processed through the P2PNet-Yield model, a deep learning framework in which we combined a feature extraction module (the backbone of P2PNet-Soy) with a yield regression module to estimate seed yields of soybean plots. Our results are built on 2 years of yield-testing plot data: 8,500 plots in 2021 and 650 plots in 2023. With these datasets, our approach incorporates several innovations to further improve the accuracy and generalizability of the seed counting and yield estimation architecture, such as fisheye image correction and data augmentation with random sensor effects. The P2PNet-Yield model achieved a genotype ranking accuracy of up to 83%. It demonstrates up to a 32% reduction in the time and cost of collecting yield data relative to traditional yield estimation, offering a scalable solution for breeding programs and agricultural productivity enhancement.
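The headline figure above is a genotype ranking accuracy. One common way to score a predicted ranking is the fraction of genotype pairs ordered the same way by predicted and true yields (a Kendall-style concordance); the paper's exact metric is not spelled out in the abstract, so this sketch is an assumed formulation with made-up numbers:

```python
def ranking_accuracy(predicted, actual):
    """Fraction of genotype pairs whose order agrees between predicted
    and actual yields. Ties in actual yield are skipped."""
    genotypes = list(actual)
    concordant, total = 0, 0
    for i in range(len(genotypes)):
        for j in range(i + 1, len(genotypes)):
            a, b = genotypes[i], genotypes[j]
            if actual[a] == actual[b]:
                continue
            total += 1
            # Same sign of difference in both rankings => concordant pair.
            if (predicted[a] - predicted[b]) * (actual[a] - actual[b]) > 0:
                concordant += 1
    return concordant / total if total else 0.0

# Toy example: hypothetical per-genotype yields (not from the paper).
pred = {"G1": 52, "G2": 61, "G3": 47, "G4": 58}
true = {"G1": 50, "G2": 64, "G3": 45, "G4": 66}
print(ranking_accuracy(pred, true))  # 5 of 6 pairs concordant, about 0.83
```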
