Title: Detecting intra-field variation in rice yield with UAV imagery and deep learning
Unmanned aerial vehicles (UAVs) equipped with multispectral sensors offer high spatial and temporal resolution imagery for monitoring crop stress at early stages of development. Analysis of UAV-derived data with advanced machine learning models could improve real-time management in agricultural systems, but guidance for this integration is currently limited. Here we compare two deep learning-based strategies for early warning detection of crop stress, using multitemporal imagery throughout the growing season to predict field-scale yield in irrigated rice in eastern Arkansas. Both deep learning strategies showed improvements over traditional statistical learning approaches, including linear regression and gradient boosted decision trees. First, we explicitly accounted for variation across developmental stages using a 3D convolutional neural network (CNN) architecture that captures both the spatial and temporal dimensions of UAV images from multiple time points throughout one growing season. 3D-CNNs achieved low prediction error on the test set, with a root mean squared error (RMSE) of 8.8% of the mean yield. For the second strategy, a 2D-CNN, we considered only spatial relationships among pixels for image features acquired during a single flyover. 2D-CNNs trained on images from a single day were most accurate when images were taken during the booting stage or later, with RMSE ranging from 7.4 to 8.2% of the mean yield. Analyses of prediction maps and feature importance indicate that a primary benefit of these convolutional, autoencoder-like models is a spatial denoising effect that corrects yield predictions for individual pixels based on the vegetation index and thermal feature values of nearby pixels. Our results highlight the promise of convolutional autoencoders for UAV-based yield prediction in rice.
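The prediction errors above are reported as RMSE expressed as a percentage of the mean observed yield. As a minimal sketch of that metric (the yield values below are hypothetical, not from the study):

```python
import numpy as np

def rmse_percent_of_mean(y_true, y_pred):
    """RMSE expressed as a percentage of the mean observed yield,
    the error metric reported in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / y_true.mean()

# Hypothetical per-pixel yields (kg/ha) for a handful of test pixels
observed = [9500, 10200, 8800, 9900]
predicted = [9300, 10400, 9100, 9700]
print(round(rmse_percent_of_mean(observed, predicted), 2))  # prints 2.39
```

Normalizing by the mean yield makes errors comparable across fields and seasons with different absolute yield levels.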
Award ID(s): 1752083, 1723529, 2054737
Journal Name: Frontiers in Plant Science
Sponsoring Org: National Science Foundation
More Like this
  1. The food production system is more vulnerable to disease than ever, and the threat is growing in an era of climate change that creates more favorable conditions for emerging diseases. Fortunately, scientists and engineers are making great strides in introducing farming innovations to tackle the challenge. Unmanned aerial vehicle (UAV) remote sensing is among these innovations and is widely applied for crop health monitoring and phenotyping. This study demonstrated the versatility of aerial remote sensing for diagnosing yellow rust infection in spring wheat in a timely manner and for determining an intervenable period to prevent yield loss. A small UAV equipped with an aerial multispectral sensor periodically flew over an experimental field in Chacabuco (−34.64, −60.46), Argentina, during the 2021 growing season, collecting remotely sensed images. Plot-level images then underwent a thorough feature-engineering process: disease-centric vegetation indices (VIs) were handcrafted from the spectral dimension, and grey-level co-occurrence matrix (GLCM) texture features from the spatial dimension. A machine learning pipeline comprising a support vector machine (SVM), random forest (RF), and multilayer perceptron (MLP) was constructed to identify healthy, mildly infected, and severely infected plots in the field. A custom 3-dimensional convolutional neural network (3D-CNN), which learns features automatically, served as an alternative prediction method. The study found the red-edge (690–740 nm) and near-infrared (NIR) (740–1000 nm) bands to be vital for distinguishing healthy from severely infected wheat. The carotenoid reflectance index 2 (CRI2), soil-adjusted vegetation index 2 (SAVI2), and GLCM contrast texture at an optimal distance d = 5 and angular direction θ = 135° were the most correlated features.
The 3D-CNN-based wheat disease monitoring achieved 60% detection accuracy as early as 40 days after sowing (DAS), when crops were tillering, increasing to 71% and 77% at the later booting and flowering stages (100–120 DAS), and reaching a peak accuracy of 79% with the spectral-spatio-temporal fused data model. The success of early disease diagnosis from low-cost multispectral UAVs not only shed new light on crop breeding and pathology but also aided crop growers by informing them of a prevention period that could potentially preserve 3–7% of the yield at a 95% confidence level.
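The handcrafted features named above follow standard definitions. A minimal sketch of two of them, CRI2 (defined as 1/R510 − 1/R700) and GLCM contrast at distance d = 5 in the 135° direction, might look like this (the 8-level quantization and the 135° offset convention are assumptions; libraries differ in both):

```python
import numpy as np

def cri2(r510, r700):
    """Carotenoid reflectance index 2: 1/R510 - 1/R700."""
    return 1.0 / r510 - 1.0 / r700

def glcm_contrast(img, d=5, levels=8):
    """GLCM contrast for the 135-degree direction at distance d.
    The offset convention (d rows down, d columns left) and the
    8-level quantization are illustrative assumptions."""
    q = np.clip((img / (img.max() + 1e-9) * levels).astype(int), 0, levels - 1)
    a = q[:-d, d:]   # reference pixels
    b = q[d:, :-d]   # their neighbours along the 135-degree offset
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # accumulate co-occurrences
    p = glcm / glcm.sum()
    idx = np.arange(levels)
    return float(np.sum(p * (idx[:, None] - idx[None, :]) ** 2))
```

A uniform plot yields zero contrast, while striped (textured) canopies yield positive contrast, which is why this feature separates healthy from diseased patches.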
  2. Abstract Understanding the interactions among agricultural processes, soil, and plants is necessary for optimizing crop yield and productivity. This study focuses on developing effective monitoring and analysis methodologies that estimate key soil and plant properties. These methodologies include data acquisition and processing approaches that use unmanned aerial vehicles (UAVs) and surface geophysical techniques. In particular, we applied these approaches to a soybean farm in Arkansas to characterize the coupled soil–plant spatial and temporal heterogeneity, as well as to identify key environmental factors that influence plant growth and yield. UAV-based multitemporal acquisition of high-resolution RGB (red–green–blue) imagery and direct measurements were used to monitor plant height and photosynthetic activity. We present an algorithm that efficiently exploits the high-resolution UAV images to estimate plant spatial abundance and plant vigor throughout the growing season. Such plant characterization is extremely important for the identification of anomalous areas, providing easily interpretable information that can be used to guide near-real-time farming decisions. Additionally, high-resolution multitemporal surface geophysical measurements of apparent soil electrical conductivity were used to estimate the spatial heterogeneity of soil texture. By integrating the multiscale, multitype soil and plant datasets, we identified the spatiotemporal covariance between soil properties and plant development and yield. Our novel approach for early season monitoring of plant spatial abundance identified areas of low productivity controlled by soil clay content, while temporal analysis of geophysical data showed the impact of soil moisture and irrigation practice (controlled by topography) on plant dynamics. Our study demonstrates the effective coupling of UAV data products with geophysical data to extract critical information for farm management.
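The abstract does not detail its plant-abundance algorithm; a common baseline for estimating the vegetated fraction of an RGB orthomosaic is excess-green (ExG) thresholding, sketched below (the 0.1 threshold is an assumed, tunable value, not taken from the study):

```python
import numpy as np

def plant_abundance(rgb):
    """Fraction of pixels classified as vegetation via excess-green
    (ExG = 2g - r - b on chromatic coordinates) thresholding.
    A common baseline, not necessarily the study's algorithm."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2) + 1e-9          # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b
    return float((exg > 0.1).mean())        # 0.1: assumed threshold
```

Mapping this fraction per grid cell over successive flights gives the kind of early-season abundance map the abstract describes.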
  3. Abstract

    Accurate and timely precipitation estimates are critical for monitoring and forecasting natural disasters such as floods. Despite having high-resolution satellite information, precipitation estimation from remotely sensed data still suffers from methodological limitations. State-of-the-art deep learning algorithms, renowned for their skill in learning accurate patterns within large and complex datasets, appear well suited to the task of precipitation estimation, given the ample amount of high-resolution satellite data. In this study, the effectiveness of applying convolutional neural networks (CNNs) together with the infrared (IR) and water vapor (WV) channels from geostationary satellites for estimating precipitation rate is explored. The proposed model's performance is evaluated over the summers of 2012 and 2013 over central CONUS at a spatial resolution of 0.08° and an hourly time scale. Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN)–Cloud Classification System (CCS), an operational satellite-based product, and PERSIANN–Stacked Denoising Autoencoder (PERSIANN-SDAE) are employed as baseline models. Results demonstrate that the proposed model (PERSIANN-CNN) provides more accurate rainfall estimates than the baseline models at various temporal and spatial scales. Specifically, PERSIANN-CNN outperforms PERSIANN-CCS (and PERSIANN-SDAE) by 54% (and 23%) in the critical success index (CSI), demonstrating the model's detection skill. Furthermore, the root-mean-square error (RMSE) of the rainfall estimates with respect to the National Centers for Environmental Prediction (NCEP) Stage IV gauge–radar data was lower for PERSIANN-CNN than for PERSIANN-CCS (PERSIANN-SDAE) by 37% (14%), showing the estimation accuracy of the proposed model.
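The critical success index used to rank the models is a standard rain-detection score: hits divided by hits plus misses plus false alarms. A minimal sketch (the 0.1 mm/h rain/no-rain threshold is an assumed example value, not from the study):

```python
import numpy as np

def critical_success_index(obs, pred, threshold=0.1):
    """CSI for rain/no-rain detection at a given rate threshold (mm/h):
    hits / (hits + misses + false alarms). Threshold is illustrative."""
    obs_rain = np.asarray(obs, dtype=float) >= threshold
    pred_rain = np.asarray(pred, dtype=float) >= threshold
    hits = np.sum(obs_rain & pred_rain)
    misses = np.sum(obs_rain & ~pred_rain)
    false_alarms = np.sum(~obs_rain & pred_rain)
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom else float("nan")
```

Unlike plain accuracy, CSI ignores the (dominant) correct no-rain cells, so it rewards genuine detection rather than predicting "dry" everywhere.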

  4. Inundation mapping is a critical task for damage assessment, emergency management, and prioritizing relief efforts during a flooding event. Remote sensing has been an effective tool for interpreting and analyzing water bodies and detecting floods over the past decades. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have demonstrated promising performance in remote sensing image classification for many applications, including inundation mapping. Unlike conventional algorithms, deep learning can learn features automatically from large datasets. This research aims to compare and investigate the performance of two state-of-the-art methods for 3D inundation mapping: a deep learning-based image analysis and a Geomorphic Flood Index (GFI). The first method, deep learning image analysis, involves three steps: 1) image classification to delineate flood boundaries, 2) integration of the flood boundaries with topography data to create a three-dimensional (3D) water surface, and 3) comparison of the 3D water surface with pre-flood topography to estimate floodwater depth. The second method, GFI, involves three phases: 1) calculation of river basin morphological information, such as river height (hr) and elevation difference (H), 2) calibration and measurement of GFI to delineate flood boundaries, and 3) calculation of the coefficient parameter (α) and correction of hr to estimate inundation depth. The methods were implemented to generate 3D inundation maps over Princeville, North Carolina, United States during Hurricane Matthew in 2016. The deep learning method demonstrated better performance, with a root mean square error (RMSE) of 0.26 m for water depth. It also achieved about 98% accuracy in delineating the flood boundaries using UAV imagery. This approach is efficient in extracting and creating a 3D flood extent map at different scales to support emergency response and recovery activities during a flood event.
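Step 3 of the deep learning workflow, differencing the water surface against pre-flood topography, reduces to a per-cell subtraction clipped at zero for dry cells. A minimal sketch with hypothetical elevations in metres:

```python
import numpy as np

def flood_depth(water_surface, dem):
    """Floodwater depth: water-surface elevation minus pre-flood
    ground elevation (DEM), clipped at zero for cells above water.
    Elevations below are hypothetical examples."""
    depth = np.asarray(water_surface, dtype=float) - np.asarray(dem, dtype=float)
    return np.clip(depth, 0.0, None)

# A 10 m water surface over a 1D terrain transect
print(flood_depth([10.0, 10.0, 10.0], [9.5, 10.5, 9.0]))
```

Cells where the ground sits above the water surface come out as zero depth rather than negative, which keeps the resulting map physically meaningful.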
  5. Abstract

    This study investigates whether coupling crop modeling and machine learning (ML) improves corn yield predictions in the US Corn Belt. The main objectives are to explore whether a hybrid approach (crop modeling + ML) results in better predictions, to investigate which combinations of hybrid models provide the most accurate predictions, and to determine which features from the crop model are most effective when integrated with ML for corn yield prediction. Five ML models (linear regression, LASSO, LightGBM, random forest, and XGBoost) and six ensemble models were designed to address the research question. The results suggest that adding crop model simulation variables (from APSIM) as input features to ML models can decrease yield prediction root mean squared error (RMSE) by 7 to 20%. Furthermore, we investigated partial inclusion of APSIM features in the ML prediction models and found that soil-moisture-related APSIM variables are most influential on the ML predictions, followed by crop-related and phenology-related variables. Finally, based on feature importance measures, we observed that simulated APSIM average drought stress and average water table depth during the growing season are the most important APSIM inputs to ML. This result indicates that weather information alone is not sufficient and that ML models need more hydrological inputs to make improved yield predictions.
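The effect reported above, crop-model outputs lowering ML prediction error when added as input features, can be illustrated on synthetic data (all variable names, coefficients, and the simple least-squares model below are invented for the sketch; the study used APSIM outputs with five ML models and ensembles):

```python
import numpy as np

def fit_rmse(X, y):
    """Fit a least-squares linear model and return its in-sample RMSE."""
    Xb = np.column_stack([np.ones(len(X)), X])   # add intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return float(np.sqrt(np.mean((Xb @ coef - y) ** 2)))

rng = np.random.default_rng(0)
weather = rng.normal(size=(200, 3))   # stand-ins for rain, temperature, radiation
drought = rng.normal(size=200)        # stand-in for an APSIM drought-stress output
# Synthetic yield that genuinely depends on the crop-model feature
yield_ = weather @ [1.0, 0.5, 0.2] - 2.0 * drought + rng.normal(0.1, 0.3, 200)

rmse_weather = fit_rmse(weather, yield_)                         # weather only
rmse_hybrid = fit_rmse(np.column_stack([weather, drought]), yield_)  # + crop model
print(round(rmse_weather, 2), round(rmse_hybrid, 2))
```

When the target truly depends on a process the weather features cannot see, appending the simulated feature shrinks the error, which is the mechanism behind the 7-20% RMSE reduction reported above.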
