Abstract Crop phenology regulates seasonal carbon and water fluxes between croplands and the atmosphere and provides essential information for monitoring and predicting crop growth dynamics and productivity. However, under rapid climate change and more frequent extreme events, future crop phenological shifts have not been well investigated or fully considered in Earth system modeling and regional climate assessments. Here, we propose an approach combining remote sensing imagery and machine learning (ML) with climate and survey data to predict future crop phenological shifts across US corn and soybean systems. Our projections show distinct acceleration patterns: under the RCP 4.5/RCP 8.5 scenarios, corn planting, silking, maturity, and harvesting stages would advance significantly, by 0.94/1.66, 1.13/2.45, 0.89/2.68, and 1.04/2.16 days/decade, respectively, during 2021–2099. Soybean responses are more muted, with smaller advancing trends for the same stages (0.59, 1.08, 0.07, and 0.64 days/decade under RCP 4.5 vs. 1.24, 1.53, 0.92, and 1.04 days/decade under RCP 8.5). These spatially explicit projections illustrate how crop phenology would respond to future climate change, highlighting widespread and progressively earlier phenological timing. Based on these findings, we call for a targeted effort to quantify the cascading effects of future phenological shifts on crop yield and on carbon, water, and energy balances and, accordingly, to craft adaptive management strategies.
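The "days/decade" figures above are linear trends fitted to annual stage dates. As a minimal sketch of that computation, using a synthetic, hypothetical planting-date series constructed to match the corn RCP 8.5 planting trend quoted in the abstract:

```python
import numpy as np

# Synthetic day-of-year planting series (hypothetical): starts at DOY 120 in
# 2021 and advances 0.166 days/year, i.e., 1.66 days/decade (corn, RCP 8.5).
years = np.arange(2021, 2100)
planting_doy = 120.0 - 0.166 * (years - 2021)

# Ordinary least-squares slope (days/year), scaled to days/decade.
slope_per_decade = 10.0 * np.polyfit(years, planting_doy, 1)[0]
```

A negative slope corresponds to the advancing (earlier) phenological timing described above.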
Near-Surface and High-Resolution Satellite Time Series for Detecting Crop Phenology
Detecting crop phenology with satellite time series is important for characterizing agroecosystem energy-water-carbon fluxes, managing farming practices, and predicting crop yields. Despite advances in satellite-based crop phenological retrievals, interpreting those retrievals in the context of on-the-ground crop phenological events remains a long-standing hurdle. In recent years, the emergence of near-surface phenology cameras (e.g., PhenoCams), along with satellite imagery of both high spatial and high temporal resolution (e.g., PlanetScope imagery), has greatly facilitated direct comparisons of retrieved characteristics with visually observed crop stages for phenological interpretation and validation. The goal of this study is to systematically assess near-surface PhenoCam and high-resolution PlanetScope time series for reconciling sensor- and ground-based crop phenological characterizations. Taking two critical crop stages (crop emergence and maturity) as examples, we retrieved diverse phenological characteristics from both PhenoCam and PlanetScope imagery for a range of agricultural sites across the United States. The curvature-based Greenup and Gu-based Upturn estimates showed good congruence with the visually observed crop emergence stage (RMSE about 1 week, bias about 0–9 days, and R² about 0.65–0.75). The threshold- and derivative-based End of greenness falling Season (EOS) estimates reconciled well with visual crop maturity observations (RMSE about 5–10 days, bias about 0–8 days, and R² about 0.6–0.75). The concordance among PlanetScope, PhenoCam, and visual phenology demonstrates the potential to interpret fine-scale sensor-derived phenological characteristics in the context of physiologically well-characterized crop phenological events, paving the way toward formal protocols for bridging ground- and satellite-based phenological characterization.
- Award ID(s): 2048068
- PAR ID: 10322172
- Date Published:
- Journal Name: Remote Sensing
- Volume: 14
- Issue: 9
- ISSN: 2072-4292
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Efficient and more accurate reporting of maize (Zea mays L.) phenology, crop condition, and progress is crucial for agronomists and policy makers. Integration of satellite imagery with machine learning models has shown great potential to improve crop classification and facilitate in-season phenological reports. However, crop phenology classification precision must be substantially improved to transform data into actionable management decisions for farmers and agronomists. An integrated approach utilizing ground truth field data for maize crop phenology (2013–2018 seasons), satellite imagery (Landsat 8), and weather data was explored with the following objectives: (i) model training and validation: identify the combination of spectral bands, vegetation indices (VIs), weather parameters, geolocation, and ground truth data yielding the model with the highest accuracy across years at each season segment; and (ii) model testing: evaluate the selected model's performance for each phenology class on unseen data (hold-out cross-validation). The best performance in classifying maize phenology was obtained when VIs (NDVI, EVI, GCVI, NDWI, GVMI) and vapor pressure deficit (VPD) were used as input variables. This study supports the integration of field ground truth, satellite imagery, and weather data to classify maize crop phenology, thereby facilitating foundational decision making and agricultural interventions for the different members of the agricultural chain.
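The input variables named above have standard formulations. A minimal sketch using those common definitions, assuming Landsat 8 surface reflectance bands and a Tetens-formula VPD (the study's exact band choices and preprocessing may differ):

```python
import math

def vegetation_indices(blue, green, red, nir, swir1):
    """Standard formulations of the indices named in the abstract,
    computed from surface reflectance values in [0, 1]."""
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    gcvi = nir / green - 1.0                      # green chlorophyll VI
    ndwi = (nir - swir1) / (nir + swir1)          # Gao-style water index
    gvmi = ((nir + 0.1) - (swir1 + 0.02)) / ((nir + 0.1) + (swir1 + 0.02))
    return {"NDVI": ndvi, "EVI": evi, "GCVI": gcvi, "NDWI": ndwi, "GVMI": gvmi}

def vpd_kpa(t_air_c, rh_pct):
    """Vapor pressure deficit (kPa) from air temperature (deg C) and
    relative humidity (%), via the Tetens saturation-vapor-pressure formula."""
    e_sat = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    return e_sat * (1.0 - rh_pct / 100.0)

# example feature vector for one pixel/date (hypothetical reflectances)
features = vegetation_indices(blue=0.04, green=0.06, red=0.08, nir=0.40, swir1=0.20)
features["VPD"] = vpd_kpa(30.0, 50.0)
```

Feature vectors like this one, stacked over pixels and dates, would then feed the phenology classifier.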
Abstract Land surface phenology (LSP) products currently carry large uncertainties due to cloud contamination and other artifacts in satellite time series, and they have been poorly validated because of the lack of spatially comparable ground measurements. This study provides a reference dataset of gap-free time series and phenological dates, built by fusing Harmonized Landsat 8 and Sentinel-2 (HLS) observations with near-surface PhenoCam time series for 78 regions of 10 × 10 km² across ecosystems in North America during 2019 and 2020. The HLS-PhenoCam LSP (HP-LSP) reference dataset at 30 m pixels is composed of: (1) 3-day synthetic gap-free EVI2 (two-band Enhanced Vegetation Index) time series that are physically meaningful for monitoring vegetation development across heterogeneity levels, training models (e.g., machine learning) for land surface mapping, and extracting phenometrics with various methods; and (2) four key phenological dates (accuracy ≤5 days) that are spatially continuous and scalable, and thus applicable to validating satellite-based phenology products (e.g., global MODIS/VIIRS LSP), developing phenological models, and analyzing climate impacts on terrestrial ecosystems.
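EVI2 uses only the red and NIR bands, which is what makes it portable across sensors. A minimal sketch of the index plus a simple linear gap fill — the latter is a stand-in for illustration only, not the dataset's actual HLS-PhenoCam fusion method:

```python
import numpy as np

def evi2(red, nir):
    """Two-band Enhanced Vegetation Index (EVI2) from surface reflectance."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def fill_gaps(doy, values):
    """Linearly interpolate NaN cloud gaps to produce a gap-free series
    (a simple stand-in for the dataset's fusion-based gap filling)."""
    values = np.asarray(values, dtype=float)
    good = ~np.isnan(values)
    return np.interp(doy, np.asarray(doy)[good], values[good])

# synthetic 3-day time steps with simulated cloud gaps
doy = np.arange(1, 362, 3)
series = evi2(red=0.08, nir=0.40) * np.ones(doy.size)
series[10:14] = np.nan                 # simulated cloudy composites
gap_free = fill_gaps(doy, series)
```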
High-quality retrieval of land surface phenology (LSP) is increasingly important for understanding the effects of climate change on ecosystem function and biosphere–atmosphere interactions. We analyzed four state-of-the-art phenology methods: threshold-, logistic-function-, moving-average-, and first-derivative-based approaches, and retrieved LSP in the Northern Hemisphere for the period 1999–2017 from Copernicus Global Land Service (CGLS) SPOT-VEGETATION and PROBA-V leaf area index (LAI) 1 km V2.0 time series. We validated the LSP estimates with near-surface PhenoCam and eddy covariance FLUXNET data over 80 deciduous forest sites. Results showed a strong correlation between the satellite LSP and ground-based observations from both PhenoCam and FLUXNET for the timing of the start of season (SoS; R² > 0.7) and the end of season (EoS; R² > 0.5). The threshold-based method performed best, with a root mean square error of ~9 d against PhenoCam and ~7 d against FLUXNET for the timing of SoS (30th percentile of the annual amplitude), and ~12 d and ~10 d, respectively, for the timing of EoS (40th percentile).
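The threshold-based method above can be sketched simply: SoS is the first day the series rises above a fixed fraction of the annual amplitude (30% per the abstract) and EoS is the last day it remains above another fraction (40%). A minimal sketch on a synthetic, hypothetical LAI curve:

```python
import numpy as np

def threshold_phenology(doy, lai, sos_frac=0.30, eos_frac=0.40):
    """Threshold-based SoS/EoS: first upward crossing and last day above
    fixed fractions of the annual amplitude (30% SoS, 40% EoS here)."""
    vmin, vmax = np.nanmin(lai), np.nanmax(lai)
    sos_level = vmin + sos_frac * (vmax - vmin)
    eos_level = vmin + eos_frac * (vmax - vmin)
    above_sos = lai >= sos_level
    above_eos = lai >= eos_level
    sos = doy[np.argmax(above_sos)]                         # first day above
    eos = doy[len(doy) - 1 - np.argmax(above_eos[::-1])]    # last day above
    return sos, eos

# synthetic deciduous-forest LAI seasonality (plateau-shaped growing season)
doy = np.arange(1, 366, dtype=float)
lai = 0.5 + 4.5 * np.exp(-((doy - 200.0) / 60.0) ** 4)
sos, eos = threshold_phenology(doy, lai)
```

Real LAI series would be smoothed first; this sketch assumes a clean single-season curve.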
Phenology is a distinct marker of the impacts of climate change on ecosystems. Accordingly, monitoring the spatiotemporal patterns of vegetation phenology is important for understanding the changing Earth system. A wide range of sensors have been used to monitor vegetation phenology, including digital cameras with different viewing geometries mounted on various types of platforms. Sensor perspective, view angle, and resolution can potentially impact estimates of phenology. We compared three different methods of remotely sensing vegetation phenology: an unoccupied aerial vehicle (UAV)-based, downward-facing RGB camera; a below-canopy, upward-facing hemispherical camera with blue (B), green (G), and near-infrared (NIR) bands; and a tower-based RGB PhenoCam positioned at an oblique angle to the canopy. With these we estimated the spring phenological transition toward canopy closure in a mixed-species temperate forest in central Virginia, USA. Our study had two objectives: (1) compare the above- and below-canopy inference of canopy greenness (using the green chromatic coordinate and normalized difference vegetation index) and canopy structural attributes (leaf area and gap fraction) by matching below-canopy hemispherical photos with high-spatial-resolution (0.03 m) UAV imagery, to find the appropriate spatial coverage and resolution for comparison; and (2) compare how UAV, ground-based, and tower-based imagery performed in estimating the timing of the spring phenological transition. We found that a spatial buffer of 20 m radius for UAV imagery is most closely comparable to below-canopy imagery in this system. Sensors and platforms agree within ±5 days on when canopy greenness stabilizes from the spring phenophase into the growing season. We show that pairing UAV imagery with tower-based observation platforms and plot-based observations for phenological studies (e.g., long-term monitoring, existing research networks, and permanent plots) has the potential to scale plot-based forest structural measures via UAV imagery, constrain uncertainty estimates around phenophases, and more robustly assess site heterogeneity.
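The green chromatic coordinate (GCC) used above is the standard camera greenness measure, G/(R+G+B). A minimal sketch, paired with a hypothetical stabilization criterion (a fraction of the seasonal range) standing in for the study's own definition of when greenness stabilizes:

```python
def gcc(r, g, b):
    """Green chromatic coordinate: green digital number over the RGB sum."""
    total = r + g + b
    return g / total if total else 0.0

def stabilization_day(doys, gcc_series, frac=0.9):
    """First day GCC reaches `frac` of its seasonal range above the minimum.
    The 0.9 fraction is a hypothetical proxy for 'greenness stabilizes'."""
    lo, hi = min(gcc_series), max(gcc_series)
    level = lo + frac * (hi - lo)
    for d, v in zip(doys, gcc_series):
        if v >= level:
            return d
    return None

# hypothetical spring GCC trajectory at 5-day intervals
doys = list(range(100, 165, 5))
spring_gcc = [0.33, 0.33, 0.34, 0.36, 0.38, 0.40, 0.42, 0.44,
              0.45, 0.45, 0.45, 0.45, 0.45]
day = stabilization_day(doys, spring_gcc)
```

Comparing `stabilization_day` outputs across UAV, hemispherical, and PhenoCam GCC series is one way to quantify the ±5-day cross-platform agreement reported above.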