Coastal wetlands, especially tidal marshes, play a crucial role in supporting ecosystems and slowing shoreline erosion. Accurate and cost-effective identification and classification of marsh types, such as high and low marshes, are important for effective coastal management and conservation efforts. However, mapping tidal marshes is challenging due to heterogeneous coastal vegetation and dynamic tidal influences. In this study, we employ a deep learning segmentation model to automate the identification and classification of tidal marsh communities in coastal Virginia, USA, using seasonal, publicly available satellite and aerial images. This study leverages the combined capabilities of Sentinel-2 and National Agriculture Imagery Program (NAIP) imagery and a UNet architecture to accurately classify tidal marsh communities. We show that by leveraging features learned from data-abundant regions and small quantities of high-quality training data collected from the target region, an accuracy as high as 88% can be achieved in the classification of marsh types, specifically high marsh and low marsh, at a spatial resolution of 0.6 m. This study contributes to the field of marsh mapping by highlighting the potential of combining multispectral satellite imagery and deep learning for accurate and efficient marsh type classification.
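As a rough illustration of the UNet segmentation setup described above, the following minimal sketch builds a small encoder-decoder in PyTorch. The input band count (a stacked Sentinel-2 plus NAIP tile), class count, and layer widths are assumptions for demonstration, not the configuration reported in the study.

```python
# Minimal UNet-style segmentation sketch (PyTorch). Band count, class count,
# and layer widths are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic UNet building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=8, n_classes=3):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits

# Forward pass on a dummy 8-band, 256x256 image tile with three classes
# (e.g., high marsh, low marsh, other).
model = TinyUNet(in_channels=8, n_classes=3)
logits = model(torch.randn(1, 8, 256, 256))
print(logits.shape)  # torch.Size([1, 3, 256, 256])
```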
This content will become publicly available on July 1, 2026

Combining Open-Source Machine Learning and Publicly Available Aerial Data (NAIP and NEON) to Achieve High-Resolution High-Accuracy Remote Sensing of Grass–Shrub–Tree Mosaics
Woody plant encroachment (WPE) is transforming grasslands globally, yet accurately mapping this process remains challenging. State-funded, publicly available high-resolution aerial imagery offers a potential solution, including the USDA’s National Agriculture Imagery Program (NAIP) and NSF’s National Ecological Observatory Network (NEON) Aerial Observation Platform (AOP). We evaluated the accuracy of land cover classification using NAIP, NEON, and both sources combined. We compared two machine learning models, support vector machines and random forests, implemented in R using large training and evaluation data sets. Our study site, Konza Prairie Biological Station, is a long-term experiment in which variable fire and grazing have created mosaics of herbaceous plants, shrubs, deciduous trees, and evergreen trees (Juniperus virginiana). All models achieved high overall accuracy (>90%), with NEON slightly outperforming NAIP. NAIP underperformed in detecting evergreen trees (52–78% accuracy vs. 83–86% with NEON). NEON models relied on LiDAR-based canopy height data, whereas NAIP relied on multispectral bands. Combining data from both platforms yielded the best results, with 97.7% overall accuracy. Vegetation indices, including NDVI (normalized difference vegetation index) and EVI (enhanced vegetation index), contributed little to model accuracy. Both machine learning methods achieved similar accuracy. Our results demonstrate that free, high-resolution imagery and open-source tools can enable accurate, high-resolution, landscape-scale WPE monitoring. Broader adoption of such approaches could substantially improve the monitoring and management of grassland biodiversity, ecosystem function, ecosystem services, and environmental resilience.
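The classification workflow above was implemented in R; the sketch below shows an equivalent pixel-level random-forest setup in Python with scikit-learn. The feature columns (NAIP spectral bands plus a NEON LiDAR canopy-height layer), the class labels, and the random placeholder data are assumptions for illustration only.

```python
# Illustrative pixel-level classification sketch in Python/scikit-learn (the study
# itself used R). Feature names and the train/test split are demonstration assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature table: one row per training pixel.
# Columns: NAIP red, green, blue, NIR reflectance plus NEON canopy height (m).
X = rng.random((5000, 5))
# Placeholder labels: 0 = herbaceous, 1 = shrub, 2 = deciduous tree, 3 = evergreen tree.
y = rng.integers(0, 4, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)

print("overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))
# Per-feature importances hint at whether spectral bands or canopy height drive the model.
print("feature importances:", rf.feature_importances_)
```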
- Award ID(s): 2025849
- PAR ID: 10634205
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Remote Sensing
- Volume: 17
- Issue: 13
- ISSN: 2072-4292
- Page Range / eLocation ID: 2224
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Unmanned aerial vehicles (UAVs) equipped with multispectral sensors offer high spatial and temporal resolution imagery for monitoring crop stress at early stages of development. Analysis of UAV-derived data with advanced machine learning models could improve real-time management in agricultural systems, but guidance for this integration is currently limited. Here we compare two deep learning-based strategies for early warning detection of crop stress, using multitemporal imagery throughout the growing season to predict field-scale yield in irrigated rice in eastern Arkansas. Both deep learning strategies showed improvements over traditional statistical learning approaches, including linear regression and gradient boosted decision trees. First, we explicitly accounted for variation across developmental stages using a 3D convolutional neural network (CNN) architecture that captures both spatial and temporal dimensions of UAV images from multiple time points throughout one growing season. 3D-CNNs achieved low prediction error on the test set, with a Root Mean Squared Error (RMSE) of 8.8% of the mean yield. For the second strategy, a 2D-CNN, we considered only spatial relationships among pixels for image features acquired during a single flyover. 2D-CNNs trained on images from a single day were most accurate when images were taken during booting stage or later, with RMSE ranging from 7.4 to 8.2% of the mean yield. A primary benefit of convolutional autoencoder-like models (based on analyses of prediction maps and feature importance) is the spatial denoising effect that corrects yield predictions for individual pixels based on the values of vegetation index and thermal features for nearby pixels. Our results highlight the promise of convolutional autoencoders for UAV-based yield prediction in rice. (An illustrative 3D-CNN sketch appears after this list.)
- 
Located at northern latitudes and subject to large seasonal temperature fluctuations, boreal forests are sensitive to the changing climate, with evidence for both increasing and decreasing productivity, depending upon conditions. Optical remote sensing of vegetation indices based on spectral reflectance offers a means of monitoring vegetation photosynthetic activity and provides a powerful tool for observing how boreal forests respond to changing environmental conditions. Reflectance-based remotely sensed optical signals at northern latitude or high-altitude regions are readily confounded by snow coverage, hampering applications of satellite-based vegetation indices in tracking vegetation productivity at large scales. Unraveling the effects of snow can be challenging from satellite data, particularly when validation data are lacking. In this study, we established an experimental system in Alberta, Canada, including six boreal tree species, both evergreen and deciduous, to evaluate the confounding effects of snow on three vegetation indices: the normalized difference vegetation index (NDVI), the photochemical reflectance index (PRI), and the chlorophyll/carotenoid index (CCI), all used in tracking vegetation productivity for boreal forests. Our results revealed substantial impacts of snow on canopy reflectance and vegetation indices, expressed as increased albedo, decreased NDVI values, and increased PRI and CCI values. These effects varied among species and functional groups (evergreen and deciduous), and different vegetation indices were affected differently, indicating contradictory, confounding effects of snow on these indices. In addition to snow effects, we evaluated the contribution of deciduous trees to vegetation indices in mixed stands of evergreen and deciduous species, which contribute to the observed relationship between greenness-based indices and ecosystem productivity of many evergreen-dominated forests that contain a deciduous component. Our results demonstrate confounding and interacting effects of snow and vegetation type on vegetation indices and illustrate the importance of explicitly considering snow effects in any global-scale photosynthesis monitoring efforts using remotely sensed vegetation indices. (An index-computation sketch appears after this list.)
- 
Vegetation phenology is a key control on water, energy, and carbon fluxes in terrestrial ecosystems. Because vegetation canopies are heterogeneous, spatially explicit information related to seasonality in vegetation activity provides valuable information for studies that use eddy covariance measurements to study ecosystem function and land-atmosphere interactions. Here we present a land surface phenology (LSP) dataset derived at 3 m spatial resolution from PlanetScope imagery across a range of plant functional types and climates in North America. The dataset provides spatially explicit information related to the timing of phenophase changes such as the start, peak, and end of vegetation activity, along with vegetation index metrics and associated quality assurance flags for the growing seasons of 2017–2021 for 10 × 10 km windows centred over 104 eddy covariance towers at AmeriFlux and National Ecological Observatory Network (NEON) sites. These LSP data can be used to analyse processes controlling the seasonality of ecosystem-scale carbon, water, and energy fluxes, to evaluate predictions from land surface models, and to assess satellite-based LSP products.
- 
The microtopography associated with ice-wedge polygons governs many aspects of Arctic ecosystem, permafrost, and hydrologic dynamics from local to regional scales owing to the linkages between microtopography and the flow and storage of water, vegetation succession, and permafrost dynamics. Widespread ice-wedge degradation is transforming low-centered polygons into high-centered polygons at an alarming rate. Accurate data on spatial distribution of ice-wedge polygons at a pan-Arctic scale are not yet available, despite the availability of sub-meter-scale remote sensing imagery. This is because the necessary spatial detail quickly produces data volumes that hamper both manual and semi-automated mapping approaches across large geographical extents. Accordingly, transforming big imagery into ‘science-ready’ insightful analytics demands novel image-to-assessment pipelines that are fueled by advanced machine learning techniques and high-performance computational resources. In this exploratory study, we tasked a deep-learning driven object instance segmentation method (i.e., the Mask R-CNN) with delineating and classifying ice-wedge polygons in very high spatial resolution aerial orthoimagery. We conducted a systematic experiment to gauge the performances and interoperability of the Mask R-CNN across spatial resolutions (0.15 m to 1 m) and image scene contents (a total of 134 km2) near Nuiqsut, Northern Alaska. The trained Mask R-CNN reported mean average precisions of 0.70 and 0.60 at thresholds of 0.50 and 0.75, respectively. Manual validations showed that approximately 95% of individual ice-wedge polygons were correctly delineated and classified, with an overall classification accuracy of 79%. Our findings show that the Mask R-CNN is a robust method to automatically identify ice-wedge polygons from fine-resolution optical imagery. Overall, this automated imagery-enabled intense mapping approach can provide a foundational framework that may propel future pan-Arctic studies of permafrost thaw, tundra landscape evolution, and the role of high latitudes in the global climate system. (A Mask R-CNN usage sketch appears after this list.)
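For the UAV rice-yield item above, the sketch below shows a minimal 3D convolutional regressor over a multitemporal image stack in PyTorch. The band count, number of flight dates, tile size, and layer widths are illustrative assumptions, not the architecture reported in that work.

```python
# Sketch of a 3D-CNN regressor over a multitemporal image stack (PyTorch).
# Channel count, number of flight dates, and tile size are illustrative assumptions.
import torch
import torch.nn as nn

class Yield3DCNN(nn.Module):
    """Convolve jointly over time and space, then regress a single yield value."""
    def __init__(self, in_channels=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),            # pool space only, keep the time dimension
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # collapse time and space
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                        # x: (batch, bands, dates, H, W)
        return self.head(self.features(x).flatten(1))

# Dummy batch: 6 spectral/thermal bands, 5 flight dates, 64x64 pixel tiles.
model = Yield3DCNN(in_channels=6)
pred = model(torch.randn(2, 6, 5, 64, 64))
print(pred.shape)  # torch.Size([2, 1]) -> one yield estimate per tile
```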
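For the boreal snow item above, the three indices it evaluates (NDVI, PRI, CCI) are all normalized band differences; the sketch below shows how they are typically computed from reflectance. The band-center wavelengths in the comments are the commonly cited ones and are not taken from that paper.

```python
# Normalized-difference vegetation indices: NDVI, PRI, and CCI.
# Band centers noted below are conventional choices, not values from the study.
import numpy as np

def norm_diff(a, b):
    """Generic normalized difference index: (a - b) / (a + b)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a - b) / (a + b)

def ndvi(nir, red):
    return norm_diff(nir, red)      # NDVI contrasts near-infrared and red reflectance

def pri(r531, r570):
    return norm_diff(r531, r570)    # PRI uses reflectance near 531 nm and 570 nm

def cci(r531, r645):
    return norm_diff(r531, r645)    # CCI contrasts a green (~531 nm) and a red (~645 nm) band

# Made-up canopy reflectance values illustrating the direction of the snow effect on NDVI.
print(ndvi(nir=0.45, red=0.05))   # ~0.80, typical of a snow-free green canopy
print(ndvi(nir=0.60, red=0.55))   # ~0.04, bright snow raises red reflectance and lowers NDVI
```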
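For the ice-wedge polygon item above, the sketch below shows how an off-the-shelf Mask R-CNN from torchvision can be instantiated for instance segmentation. The class list and tile size are placeholder assumptions; this is not the trained model from that study.

```python
# Minimal instance-segmentation sketch using torchvision's Mask R-CNN implementation.
# Class count (background plus two ice-wedge polygon types) and tile size are assumptions.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Three classes: background, low-centered polygon, high-centered polygon (hypothetical labels).
model = maskrcnn_resnet50_fpn(num_classes=3)
model.eval()

# One dummy RGB orthoimage tile, values in [0, 1], shape (channels, height, width).
tile = torch.rand(3, 512, 512)

with torch.no_grad():
    outputs = model([tile])

# Each output dict holds per-instance boxes, class labels, confidence scores, and masks.
print(outputs[0].keys())  # dict_keys(['boxes', 'labels', 'scores', 'masks'])
```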