Abstract Efficient and accurate reporting of maize (Zea mays L.) phenology, crop condition, and progress is crucial for agronomists and policymakers. Integration of satellite imagery with machine learning models has shown great potential to improve crop classification and facilitate in-season phenological reports. However, crop phenology classification precision must be substantially improved to transform data into actionable management decisions for farmers and agronomists. An integrated approach utilizing ground truth field data for maize crop phenology (2013–2018 seasons), satellite imagery (Landsat 8), and weather data was explored with the following objectives: (i) model training and validation—identify the combination of spectral bands, vegetation indices (VIs), weather parameters, geolocation, and ground truth data resulting in the model with the highest accuracy across years at each season segment (step one); and (ii) model testing—evaluate the selected model's performance for each phenology class on unseen data (hold-out cross-validation) (step two). The best model performance for classifying maize phenology was documented when VIs (NDVI, EVI, GCVI, NDWI, GVMI) and vapor pressure deficit (VPD) were used as input variables. This study supports the integration of field ground truth, satellite imagery, and weather data to classify maize crop phenology, thereby facilitating foundational decision making and agricultural interventions for the different members of the agricultural chain.
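As a sketch, the five VIs and the VPD input named in the abstract can be computed from surface reflectance and basic weather variables. The formulations below are common ones from the remote sensing literature; the Landsat 8 band assignments and the Tetens VPD formula are assumptions, not taken from the paper:

```python
import numpy as np

def vegetation_indices(blue, green, red, nir, swir1):
    """Common formulations of the five VIs named in the abstract.

    Inputs are surface reflectance (0-1) from Landsat 8 bands B2 (blue),
    B3 (green), B4 (red), B5 (NIR), and B6 (SWIR1); scalars or arrays.
    """
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    gcvi = nir / green - 1.0
    ndwi = (nir - swir1) / (nir + swir1)  # Gao (1996) NIR-SWIR water index
    gvmi = ((nir + 0.1) - (swir1 + 0.02)) / ((nir + 0.1) + (swir1 + 0.02))
    return {"NDVI": ndvi, "EVI": evi, "GCVI": gcvi, "NDWI": ndwi, "GVMI": gvmi}

def vpd_kpa(temp_c, rh_pct):
    """Vapor pressure deficit (kPa) from air temperature (degC) and relative
    humidity (%), using the Tetens saturation vapor pressure formula."""
    es = 0.6108 * np.exp(17.27 * temp_c / (temp_c + 237.3))
    return es * (1.0 - rh_pct / 100.0)
```

For example, a vigorous mid-season canopy (NIR well above red reflectance) yields NDVI near 0.7, and `vpd_kpa(25, 50)` is roughly 1.6 kPa.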
Impact of High-Cadence Earth Observation in Maize Crop Phenology Classification
For farmers, policymakers, and government agencies, it is critical to accurately define agricultural crop phenology and its spatiotemporal variability. At present, two approaches are used to report crop phenology: land surface phenology provides information about the overall trend, whereas weekly USDA-NASS reports describe the development of particular crops at the regional level. High-cadence earth observations could improve the accuracy of these estimations and bring crop phenology classifications closer to the precision farmers demand. The second component of the proposed solution requires robust classifiers (e.g., random forest, RF) capable of handling large data sets. To evaluate this solution, this study compared the output of an RF classifier model using weather data, two different satellite sources (Planet Fusion, PF, and Sentinel-2, S-2), and ground truth data to improve maize (Zea mays L.) crop phenology classification, using two regions of Kansas (Southwest and Central) as a testbed during the 2017 growing season. Our findings suggest that high temporal resolution (PF) data can significantly improve crop classification metrics (f1-score = 0.94) relative to S-2 (f1-score = 0.86). Additionally, f1-scores declined to between 0.74 and 0.60 when we assessed the ability of S-2 to extend the temporal forecast for crop phenology. This research highlights the critical value of very high temporal resolution (daily) earth observation data for crop monitoring and decision making in agriculture.
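The RF classification workflow described above can be sketched with scikit-learn. The design matrix, five phenology classes, and all numbers below are synthetic placeholders; the paper's actual predictors (PF/S-2 reflectance plus weather) and labels are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder design matrix: rows = field observations, columns = satellite
# reflectance/VI features plus weather covariates for that date.
X = rng.normal(size=(500, 12))
# Placeholder labels: five phenology classes (e.g., stages from VE to R6).
y = rng.integers(0, 5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
# The abstract compares f1-scores (0.94 for PF vs. 0.86 for S-2); the
# weighted f1 below aggregates per-class scores by class support.
score = f1_score(y_te, clf.predict(X_te), average="weighted")
print(score)
```

With real PF or S-2 features, the same fit/score pair reproduces the comparison; here the random labels give only chance-level scores.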
- Award ID(s): 1715894
- PAR ID: 10440510
- Date Published:
- Journal Name: Remote Sensing
- Volume: 14
- Issue: 3
- ISSN: 2072-4292
- Page Range / eLocation ID: 469
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Mapping crop types and land cover in smallholder farming systems in sub-Saharan Africa remains a challenge due to data costs, high cloud cover, and the poor temporal resolution of satellite data. With improvements in satellite technology and image processing techniques, there is potential to integrate data from sensors with different spectral characteristics and temporal resolutions to effectively map crop types and land cover. In our Malawi study area, it is common that no cloud-free images are available for the entire crop growth season. The goal of this experiment is to produce detailed crop type and land cover maps of agricultural landscapes using Sentinel-1 (S-1) radar data, Sentinel-2 (S-2) optical data, S-2 and PlanetScope data fusion, and the S-1 C2 matrix and S-1 H/α polarimetric decomposition. We evaluated the ability to combine these data to map crop types and land cover in two smallholder farming locations. The random forest algorithm, trained with crop and land cover type data collected in the field and complemented with samples digitized from Google Earth Pro and DigitalGlobe, was used for the classification experiments. The results show that the fusion of the S-2 and PlanetScope fused image with the S-1 covariance (C2) matrix and the H/α polarimetric decomposition (an entropy-based decomposition method) outperformed all other image combinations, producing the highest overall accuracies (OAs) (>85%) and Kappa coefficients (>0.80). These OAs represent a 13.53% and 11.7% improvement over the Sentinel-2-only experiment (OAs < 80%) for Thimalala and Edundu, respectively. The experiment also provided accurate insights into the distribution of crop and land cover types in the area. The findings suggest that in cloud-dense and resource-poor locations, fusing high temporal resolution radar data with available optical data presents an opportunity for operational mapping of crop types and land cover to support food security and environmental management decision-making.
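The overall accuracy and Kappa coefficient used to compare the image combinations can be computed directly from reference and predicted labels; the class names and toy sample below are hypothetical stand-ins:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Toy reference vs. predicted labels (hypothetical crop/land cover classes).
y_true = ["maize", "maize", "groundnut", "forest", "forest",
          "maize", "groundnut", "forest"]
y_pred = ["maize", "maize", "groundnut", "forest", "maize",
          "maize", "groundnut", "forest"]

oa = accuracy_score(y_true, y_pred)        # overall accuracy: 7/8 correct
kappa = cohen_kappa_score(y_true, y_pred)  # agreement corrected for chance
print(oa, kappa)
```

Kappa is always at most the OA, since it discounts the agreement expected from the class marginals alone.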
-
Abstract Objectives: Human responses to climate variation have a rich anthropological history. However, much less is known about how people living in small-scale societies perceive climate change, and what climate data are useful in predicting food production at a scale that affects daily lives. Methods: We use longitudinal ethnographic interviews and economic data to first ask what aspects of climate variation affect the agricultural cycle and food production for Yucatec Maya farmers. Sixty years of high-resolution meteorological data and harvest assessments are then used to detect the scale at which climate data predict good and bad crop yields, and to analyze long-term changes in climate variables critical to food production. Results: We find that (a) only local, daily precipitation closely fits the climate pattern described by farmers; other temporal (annual and monthly) scales miss key information about what farmers find important to successful harvests; (b) at both community and municipal levels, heavy late-season rains associated with tropical storms have the greatest negative impact on crop yields; and (c) in contrast to long-term patterns from regional and state data, local measures show an increase in rainfall during the late growing season, indicating that fine-grained data are needed to make accurate inferences about climate trends. Conclusion: Our findings highlight the importance of defining climate variables at scales appropriate to human behavior. Coarse-grained annual, monthly, national, and state-level data tell us little about climate attributes pertinent to farmers and food production. However, high-resolution daily, local precipitation data do capture how climate variation shapes food production.
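The kind of farmer-relevant variable this study points to (heavy late-season rain days, visible only in daily local records) can be sketched with pandas. The station record below is synthetic and the 25 mm/day heavy-rain threshold is an assumed illustration, not the study's definition:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic daily gauge record for one growing season; the study relies on
# local station data rather than regional or state aggregates.
days = pd.date_range("2020-05-01", "2020-11-30", freq="D")
rain_mm = pd.Series(rng.gamma(shape=0.4, scale=12.0, size=len(days)),
                    index=days)

# Heavy late-season rain days, the variable linked to the largest yield
# losses; a 25 mm/day threshold is assumed here for illustration.
late_season = rain_mm["2020-09":"2020-11"]
heavy_days = int((late_season > 25.0).sum())

# Monthly totals, by contrast, hide how rain was distributed across days.
monthly_mm = rain_mm.resample("MS").sum()
print(heavy_days, len(monthly_mm))
```

The same daily series collapses to just seven monthly totals, which is why coarse aggregates cannot capture storm-driven damage.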
-
Due to the growing volume of remote sensing data and the low latency required for safe marine navigation, machine learning (ML) algorithms are being developed to accelerate sea ice chart generation, currently a manual interpretation task. However, the low signal-to-noise ratio of the freely available Sentinel-1 Synthetic Aperture Radar (SAR) imagery, the ambiguity of backscatter signals for ice types, and the scarcity of open-source high-resolution labelled data make automating sea ice mapping challenging. We use Extreme Earth version 2, a high-resolution benchmark dataset generated for ML training and evaluation, to investigate the effectiveness of ML for automated sea ice mapping. Our customized pipeline combines ResNets and Atrous Spatial Pyramid Pooling for SAR image segmentation. We investigate the performance of our model for: (i) binary classification of sea ice and open water in a segmentation framework; and (ii) multiclass segmentation of five sea ice types. For binary ice-water classification, models trained with our largest training set have weighted F1 scores all greater than 0.95 for January and July test scenes. Specifically, the median weighted F1 score was 0.98, indicating high performance for both months. By comparison, a competitive baseline U-Net has a weighted average F1 score ranging from 0.92 to 0.94 (median 0.93) for July, and 0.97 to 0.98 (median 0.97) for January. Multiclass ice type classification is more challenging, and even though our models achieve a 2% improvement in weighted F1 average compared to the baseline U-Net, test weighted F1 is generally between 0.60 and 0.80. Our approach can efficiently segment full SAR scenes in one run, is faster than the baseline U-Net, retains spatial resolution and dimension, and is more robust against noise compared to approaches that rely on patch classification.
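The weighted F1 score reported for both the binary and multiclass experiments can be computed from flattened segmentation masks; the toy 4×4 masks and three-class scheme below are illustrative stand-ins for full SAR scenes:

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy 4x4 segmentation masks with three hypothetical classes
# (0 = open water, 1 = young ice, 2 = old ice).
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 2],
                  [2, 2, 2, 2],
                  [0, 0, 1, 2]])
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 2, 2],
                 [2, 2, 2, 2],
                 [0, 1, 1, 2]])

# Weighted F1 averages per-class F1 weighted by class support (pixel count),
# the metric the study reports for both experiments.
score = f1_score(truth.ravel(), pred.ravel(), average="weighted")
print(round(score, 3))  # -> 0.874
```

Weighting by support keeps rare classes from dominating the score, which matters when ice types are heavily imbalanced within a scene.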
-
Detecting crop phenology with satellite time series is important to characterize agroecosystem energy-water-carbon fluxes, manage farming practices, and predict crop yields. Despite advances in satellite-based crop phenological retrievals, interpreting those retrieval characteristics in the context of on-the-ground crop phenological events remains a long-standing hurdle. In recent years, the emergence of near-surface phenology cameras (e.g., PhenoCams), along with satellite imagery of both high spatial and temporal resolution (e.g., PlanetScope imagery), has largely facilitated direct comparisons of retrieved characteristics to visually observed crop stages for phenological interpretation and validation. The goal of this study is to systematically assess near-surface PhenoCam and high-resolution PlanetScope time series in reconciling sensor- and ground-based crop phenological characterizations. Taking two critical crop stages (the crop emergence and maturity stages) as an example, we retrieved diverse phenological characteristics from both PhenoCam and PlanetScope imagery for a range of agricultural sites across the United States. The results showed that the curvature-based Greenup and Gu-based Upturn estimates were in good congruence with the visually observed crop emergence stage (RMSE about 1 week, bias about 0–9 days, and R-square about 0.65–0.75). The threshold- and derivative-based End of greenness falling Season (EOS) estimates reconciled well with visual crop maturity observations (RMSE about 5–10 days, bias about 0–8 days, and R-square about 0.6–0.75). The concordance among PlanetScope, PhenoCam, and visual phenology demonstrated the potential to interpret fine-scale sensor-derived phenological characteristics in the context of physiologically well-characterized crop phenological events, paving the way toward formal protocols for bridging ground- and satellite-based phenological characterization.
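A minimal sketch of one retrieval family the study evaluates, a threshold-based EOS estimate from a VI time series, might look like the following. The 50% amplitude threshold and the idealized seasonal curve are assumptions for illustration, not the paper's exact method:

```python
import numpy as np

def threshold_eos(doy, vi, frac=0.5):
    """Threshold-based End of Season: first day on the falling limb where
    the VI drops below a fixed fraction of its seasonal amplitude.

    One simplified member of the threshold-based retrieval family; the
    study also evaluates curvature- and derivative-based characteristics.
    """
    vi = np.asarray(vi, dtype=float)
    level = vi.min() + frac * (vi.max() - vi.min())
    peak = int(np.argmax(vi))
    below = np.where(vi[peak:] < level)[0]  # offsets past the seasonal peak
    return None if below.size == 0 else doy[peak + below[0]]

# Idealized NDVI-like seasonal curve sampled every 5 days (day of year).
doy = np.arange(120, 301, 5)
vi = 0.2 + 0.6 * np.exp(-((doy - 200) ** 2) / (2 * 35.0 ** 2))
print(threshold_eos(doy, vi))  # -> 245
```

Varying `frac` shifts the retrieved EOS earlier or later, which is one reason threshold- and derivative-based estimates must be validated against ground observations such as PhenoCam imagery.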