Title: Growth Stage Classification and Harvest Scheduling of Snap Bean Using Hyperspectral Sensing: A Greenhouse Study
The agricultural industry suffers from a significant amount of food waste, some of which originates from an inability to apply site-specific management at the farm level. Snap bean, a broad-acre crop that covers hundreds of thousands of acres across the USA, is not exempt from this need for informed, within-field, spatially-explicit management approaches. This study aimed to assess the utility of machine learning algorithms for growth stage and pod maturity classification of snap bean (cv. Huntington), as well as for detecting and discriminating the spectral and biophysical features that lead to accurate classification results. Four major growth stages and six main sieve-size pod maturity levels were evaluated for growth stage and pod maturity classification, respectively. A point-based in situ spectroradiometer spanning the visible-near-infrared and shortwave-infrared domains (VNIR-SWIR; 400–2500 nm) was used, and radiance values were converted to reflectance to normalize for illumination changes between samples. After preprocessing the raw data, we approached pod maturity assessment with multi-class classification, and growth stage determination with both binary and multi-class classification methods. Growth stage assessment via the binary method exhibited accuracies ranging from 90–98%, with the continuum-removal approach proving the best mathematical enhancement method. The growth stage multi-class classification method used raw reflectance data and identified a pair of wavelengths, 493 nm and 640 nm, in two basic transforms (ratio and normalized difference), yielding high accuracies (~79%). Pod maturity assessment detected narrow-band wavelengths in the VIS and SWIR regions, discriminating between not-ready-to-harvest and ready-to-harvest scenarios with classification measures at the ~78% level using continuum-removed spectra.
Our work represents a best-case scenario, i.e., we consider it a stepping-stone to understanding snap bean harvest maturity assessment via hyperspectral sensing at a scalable level (i.e., airborne systems). Future work involves transferring these concepts to unmanned aerial system (UAS) field experiments and validating whether a simple multispectral camera mounted on a UAS, incorporating fewer than 10 spectral bands, could meet the needs of both growth stage and pod maturity classification in snap bean production.
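The two enhancements named in the abstract — continuum removal and the ratio / normalized-difference transforms at 493 nm and 640 nm — can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the upper-convex-hull continuum and the nearest-band lookup are our assumptions.

```python
import numpy as np

def continuum_removed(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull (the continuum).
    wl, refl: 1-D arrays of increasing wavelengths (nm) and reflectance."""
    hull = []  # upper convex hull via a monotone-chain sweep
    for p in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the chord hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)  # piecewise-linear continuum line
    return refl / continuum

def two_band_indices(wl, refl, b1=493.0, b2=640.0):
    """Ratio and normalized-difference transforms at two bands,
    using the nearest measured wavelength to each target band."""
    r1 = refl[np.argmin(np.abs(wl - b1))]
    r2 = refl[np.argmin(np.abs(wl - b2))]
    return r1 / r2, (r1 - r2) / (r1 + r2)
```

By construction the continuum-removed spectrum equals 1.0 at the hull's anchor points and dips below 1.0 inside absorption features, which is what makes those features comparable across illumination conditions.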
Award ID(s):
1827551
PAR ID:
10290131
Author(s) / Creator(s):
Date Published:
Journal Name:
Remote Sensing
Volume:
12
Issue:
22
ISSN:
2072-4292
Page Range / eLocation ID:
3809
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Accurate, precise, and timely estimation of crop yield is key to a grower’s ability to proactively manage crop growth and predict harvest logistics. Such yield predictions typically are based on multi-parametric models and in-situ sampling. Here we investigate the extension of a greenhouse study to low-altitude unmanned aerial systems (UAS). Our principal objective was to investigate snap bean (Phaseolus vulgaris) crop yield using imaging spectroscopy (hyperspectral imaging) in the visible to near-infrared (VNIR; 400–1000 nm) region via UAS. We aimed to solve the problem of crop yield modelling by identifying spectral features that explain yield and by evaluating the earliest time period at which yield can be predicted accurately. We introduced a Python library, named Jostar, for spectral feature selection. Embedded in Jostar, we proposed a new ranking method for selected features that reaches an agreement between multiple optimization models. Moreover, we implemented a well-known denoising algorithm for the spectral data used in this study. This study benefited from two years of remotely sensed data, captured at multiple instances over the summers of 2019 and 2020, with 24 plots and 18 plots, respectively. Two harvest stage models, early and late harvest, were assessed at two different locations in upstate New York, USA. Six varieties of snap bean were quantified using two components of yield: pod weight and seed length. We used two different vegetation detection algorithms, the Red-Edge Normalized Difference Vegetation Index (RENDVI) and the Spectral Angle Mapper (SAM), to subset the fields into vegetation vs. non-vegetation pixels. Partial least squares regression (PLSR) was used as the regression model. Among nine different optimization models embedded in Jostar, we selected the Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), along with their resulting joint ranking.
The findings show that pod weight can be explained with a high coefficient of determination (R2 = 0.78–0.93) and low root-mean-square error (RMSE = 940–1369 kg/ha) for the two years of data. Seed length yield assessment resulted in higher accuracies (R2 = 0.83–0.98) and lower errors (RMSE = 4.245–6.018 mm). Among the optimization models used, ACO and SA outperformed the others, and the SAM vegetation detection approach showed improved results compared to the RENDVI approach when dense canopies were examined. Wavelengths at 450, 500, 520, 650, 700, and 760 nm were identified in almost all data sets and harvest stage models used. The period between 44 and 55 days after planting (DAP) was found to be the optimal time period for yield assessment. Future work should involve transferring the learned concepts to a multispectral system for eventual operational use; further attention should also be paid to seed length as a ground truth data collection technique, since this yield indicator is far more rapid and straightforward to collect.
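The SAM-based vegetation masking used to separate vegetation from non-vegetation pixels can be sketched as below. The angle threshold (0.10 rad) and the array shapes are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def sam_angle(pixels, reference):
    """Spectral angle (radians) between each pixel spectrum and a
    reference spectrum. pixels: (N, B) array; reference: (B,) array."""
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    cos = np.clip(dot / norms, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.arccos(cos)

def vegetation_mask(pixels, reference, max_angle=0.10):
    """Boolean mask: True where the angle to the vegetation reference
    falls below max_angle (an assumed, illustrative threshold)."""
    return sam_angle(pixels, reference) < max_angle
```

Because the angle depends only on spectral shape, not magnitude, SAM is insensitive to brightness differences — one plausible reason it outperformed RENDVI over dense canopies.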
  2. Farmers and growers typically use approaches based on the crop environment and local meteorology, many of which are labor-intensive, to predict crop yield. These approaches have found broad acceptance but lack real-time and physiological feedback for near-daily management purposes. This is true for broad-acre crops, such as snap bean, which is valued at hundreds of millions of dollars in the annual agricultural market. We aim to investigate the relationships between snap bean yield and plant spectral and biophysical information, collected using a hyperspectral spectroradiometer (400 to 2500 nm). The experiment focused on 48 single snap bean plants (cv. Huntington) in a controlled greenhouse environment during the growth period (69 days). We used applicable accuracy and precision metrics from partial least squares regression and cross-validation methods to evaluate the predictive ability of two harvest stages, namely an early-harvest and a late-harvest stage, against our yield indicator (bean pod weight). Four different spectral data sets were used to investigate whether such oversampled, hyperspectral data sets could accurately and precisely model observed variability in yield, in terms of the coefficient of determination (R2) and root-mean-square error (RMSE). Our approach hinges on the premise that the spectral bands from this study that best explain yield variability can be downsampled from a hyperspectral system for use in a more cost-effective, operational multispectral sensor. Our results suggested the optimal period for spectral evaluation of snap bean yield is 20 to 25 or 32 days prior to harvest for the early- and late-harvest stages, respectively, with the best model performing at a low RMSE (3.02 g/plant) and a high coefficient of determination (R2 = 0.72).
An affordable, wavelength-programmable multispectral imager mounted on an unmanned aerial system, with bands corresponding to those identified, could provide a near real-time and reliable yield estimate prior to harvest.
  3. The advent of remote sensing from unmanned aerial systems (UAS) has opened the door to more affordable and effective methods of imaging and mapping surface geophysical properties, with many important applications in areas such as coastal zone management, ecology, agriculture, and defense. We describe a study to validate and improve soil moisture content retrieval and mapping from hyperspectral imagery collected by a UAS. Our approach uses a recently developed model known as the multilayer radiative transfer model of soil reflectance (MARMIT). MARMIT partitions contributions due to water and the sediment surface into equivalent but separate layers, and describes these layers using an equivalent slab model formalism. The model's water layer thickness, along with the fraction of wet surface, become parameters that must be optimized in a calibration step; extinction due to water absorption is applied in the model based on the equivalent water layer thickness, while transmission and reflection coefficients follow the Fresnel formalism. In this work, we evaluate the model in both field settings, using UAS hyperspectral imagery, and laboratory settings, using hyperspectral spectra obtained with a goniometer. Sediment samples obtained from four field sites representing disparate environmental settings comprised the laboratory analysis, while field validation used hyperspectral UAS imagery and coordinated ground truth obtained on a barrier island shore during field campaigns in 2018 and 2019.
Analysis of the most significant wavelengths for retrieval indicates a number of different wavelengths in the short-wave infrared (SWIR) that provide accurate fits to measured soil moisture content in the laboratory, with normalized root mean square error (NRMSE) < 0.145; independent evaluation on sequestered test data from the hyperspectral UAS imagery obtained during the field campaigns yielded an average NRMSE = 0.169 and a median NRMSE = 0.152 in a bootstrap analysis.
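A minimal sketch of the slab formalism described above: the dry-soil reflectance is viewed through an equivalent water layer with two-way Beer–Lambert extinction, Fresnel-style interface coefficients, and a linear mix of wet and dry surface fractions. The interface coefficients below are illustrative placeholders, and in MARMIT proper the layer thickness and wet fraction are calibrated per sample, not fixed.

```python
import numpy as np

def wet_soil_reflectance(r_dry, alpha, L, eps, r12=0.02, r21=0.28):
    """Simplified MARMIT-style wet-soil reflectance.
    r_dry: dry soil reflectance; alpha: water absorption coefficient (1/cm);
    L: equivalent water layer thickness (cm); eps: wet-surface fraction.
    r12/r21 are assumed interface reflectances (air->water, water->air);
    the true model derives them from the Fresnel equations."""
    t12, t21 = 1.0 - r12, 1.0 - r21
    atten = np.exp(-2.0 * alpha * L)  # two-way Beer-Lambert extinction
    # Slab model: direct interface reflection plus light transmitted into
    # the water layer, reflected by the soil, and transmitted back out,
    # summed over internal bounces (geometric series in the denominator).
    r_wet = r12 + (t12 * t21 * r_dry * atten) / (1.0 - r21 * r_dry * atten)
    return eps * r_wet + (1.0 - eps) * r_dry  # mix of wet and dry patches
```

Because alpha grows steeply in the SWIR, thicker water layers darken those bands fastest — consistent with the SWIR wavelengths proving most significant for moisture retrieval.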
  4. In vivo fluorescence imaging in the shortwave infrared (SWIR, 1,000–1,700 nm) and extended SWIR (ESWIR, 1,700–2,700 nm) regions has tremendous potential for diagnostic imaging. Although image contrast has been shown to improve as longer wavelengths are accessed, the design and synthesis of organic fluorophores that emit in these regions is challenging. Here we synthesize a series of silicon-RosIndolizine (SiRos) fluorophores that exhibit peak emission wavelengths from 1,300–1,700 nm and emission onsets of 1,800–2,200 nm. We characterize the fluorophores photophysically (both steady-state and time-resolved), electrochemically and computationally using time-dependent density functional theory. Using two of the fluorophores (SiRos1300 and SiRos1550), we formulate nanoemulsions and use them for general systemic circulatory SWIR fluorescence imaging of the cardiovascular system in mice. These studies resulted in high-resolution SWIR images with well-defined vasculature visible throughout the entire circulatory system. This SiRos scaffold establishes design principles for generating long-wavelength emitting SWIR and ESWIR fluorophores.
  5. Objectives: This paper introduces a novel method for the detection and classification of aortic stenosis (AS) using the time-frequency features of chest cardio-mechanical signals collected from wearable sensors, namely seismo-cardiogram (SCG) and gyro-cardiogram (GCG) signals. Such a method could potentially monitor high-risk patients outside the clinic. Methods: Experimental measurements were collected from twenty patients with AS and twenty healthy subjects. First, a digital signal processing framework is proposed to extract time-frequency features. The features are then selected via the analysis of variance test. Different combinations of features are evaluated using decision tree, random forest, and artificial neural network methods. Two classification tasks are conducted. The first task is a binary classification between normal subjects and AS patients. The second task is a multi-class classification of AS patients with co-existing valvular heart diseases. Results: In the binary classification task, the average accuracies achieved are 96.25% from the decision tree, 97.43% from the random forest, and 95.56% from the neural network. The best performance comes from combined SCG and GCG features with the random forest classifier. In the multi-class classification, the best performance is 92.99%, using the random forest classifier and SCG features. Conclusion: The results suggest that the solution could be a feasible method for classifying aortic stenosis, in both the binary and multi-class tasks. They also indicate that most of the important time-frequency features are below 11 Hz. Significance: The proposed method shows great potential to provide continuous monitoring of valvular heart diseases to prevent patients from sudden critical cardiac situations.
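The ANOVA-based feature selection followed by a random forest classifier described above can be sketched as below. The data are a synthetic stand-in (40 subjects by 60 candidate features, with the first five made informative), and the selection size k, tree count, and fold count are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for time-frequency features of SCG/GCG signals:
# 40 subjects (20 AS, 20 healthy) x 60 candidate features.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 60))
y = np.repeat([0, 1], 20)
X[y == 1, :5] += 1.5  # make the first five features informative

# ANOVA F-test selection inside the pipeline, so it is refit per fold
# and never sees the held-out subjects, then a random forest classifier.
clf = make_pipeline(SelectKBest(f_classif, k=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

Keeping the selection step inside the pipeline matters: selecting features on all 40 subjects first and cross-validating afterwards would leak test information into the feature ranking and inflate the reported accuracies.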