Title: Mapping Quaking Aspen Using Seasonal Sentinel-1 and Sentinel-2 Composite Imagery across the Southern Rockies, USA
Quaking aspen is an important deciduous tree species across interior western U.S. forests. Existing maps of aspen distribution are based on Landsat imagery and often miss small stands (<0.09 ha, i.e., smaller than a single 30 m Landsat pixel), which rapidly regrow when managed or following disturbance. In this study, we present methods for deriving a new regional map of aspen forests using one year of Sentinel-1 (S1) and Sentinel-2 (S2) imagery in Google Earth Engine. Using the observed annual phenology of aspen across the Southern Rockies and leveraging the high revisit frequency of S1 and S2, we developed ecologically relevant seasonal imagery composites. We derived spectral indices and radar textural features targeting canopy structure, moisture, and chlorophyll content. Using spatial block cross-validation and Random Forests, we assessed the accuracy of different scenarios and selected the best-performing set of features for classification. Comparisons were then made with existing landcover products across the study region. The resulting map improves on existing products in both accuracy (0.93 average F1-score) and detection of smaller forest patches. These methods enable accurate mapping at spatial and temporal scales relevant to forest management for one of the most widely distributed tree species in North America.
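The evaluation strategy described in the abstract (spatial block cross-validation of a Random Forest classifier) can be sketched as follows. This is an illustrative example on synthetic data, not the authors' code: the features, block assignment, and model settings are assumptions. The key idea is that samples are grouped into spatial blocks, and no block contributes to both training and testing, which reduces the optimistic bias caused by spatial autocorrelation.

```python
# Illustrative sketch: spatial block cross-validation of a Random Forest.
# Synthetic data stands in for seasonal spectral indices and radar texture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                       # 6 assumed features per pixel
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
blocks = rng.integers(0, 10, size=n)              # assumed spatial block ids

scores = []
# GroupKFold keeps each block entirely in either the train or the test split.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=blocks):
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], rf.predict(X[test_idx])))

mean_f1 = float(np.mean(scores))
print(round(mean_f1, 3))
```

Reporting the mean F1-score across folds mirrors how the paper summarizes classification accuracy (0.93 average F1-score) across candidate feature sets.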
Award ID(s):
2153040
PAR ID:
10591767
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Remote Sensing
Volume:
16
Issue:
9
ISSN:
2072-4292
Page Range / eLocation ID:
1619
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The spatial distribution of forest stands is one of the fundamental properties of forests. Timely and accurate stand distribution data can help people better understand, manage, and utilize forests. The development of remote sensing technology has made it possible to map the distribution of tree species in a timely and accurate manner. At present, a large amount of remote sensing data has been accumulated, including high-spatial-resolution images, time-series images, light detection and ranging (LiDAR) data, etc. However, these data have not been fully utilized. To accurately identify the tree species of forest stands, diverse and complementary data need to be combined for classification. A curve-matching-based method called the fusion of spectral image and point data (FSP) algorithm was developed to fuse high-spatial-resolution images, time-series images, and LiDAR data for forest stand classification. In this method, the multispectral Sentinel-2 image and high-spatial-resolution aerial images were first fused. Then, the fused images were segmented to derive forest stands, which are the basic unit for classification. To extract features from forest stands, the gray histogram of each band was extracted from the aerial images. The average reflectance in each stand was calculated and stacked for the time-series images. The profile curve of forest structure was generated from the LiDAR data. Finally, the features of forest stands were compared with training samples using curve-matching methods to derive the tree species. The developed method was tested in a forest farm to classify 11 tree species. The average accuracy of the FSP method across ten runs was between 0.900 and 0.913, and the maximum accuracy was 0.945. The experiments demonstrate that the FSP method is more accurate and stable than traditional machine learning classification methods.
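The curve-matching step at the core of FSP can be sketched minimally: each stand is represented by a feature curve (a histogram, a time-series reflectance profile, or a LiDAR structure profile), and the stand is assigned the species of the closest training curve. The species names, curve values, and the Euclidean distance metric below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of curve matching: assign each stand the species whose
# training curve is nearest to the stand's observed feature curve.
import numpy as np

def match_species(stand_curve, training_curves):
    """Return the species whose template curve is closest (Euclidean distance)."""
    best_species, best_dist = None, np.inf
    for species, curve in training_curves.items():
        d = np.linalg.norm(stand_curve - curve)
        if d < best_dist:
            best_species, best_dist = species, d
    return best_species

# Toy templates: assumed mean seasonal reflectance profiles for two species.
training = {
    "aspen": np.array([0.2, 0.6, 0.8, 0.4]),
    "spruce": np.array([0.5, 0.55, 0.6, 0.55]),
}
observed = np.array([0.25, 0.62, 0.78, 0.38])
print(match_species(observed, training))  # → aspen
```

In the paper, several such curves (per-band gray histograms, stacked time-series reflectance, and LiDAR profile curves) are compared jointly rather than a single curve as here.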
  2.
    Coastal mangrove forests provide important ecosystem goods and services, including carbon sequestration, biodiversity conservation, and hazard mitigation. However, they are being destroyed at an alarming rate by human activities. To characterize mangrove forest changes, evaluate their impacts, and support relevant protection and restoration decision making, accurate and up-to-date mangrove extent mapping at large spatial scales is essential. Available large-scale mangrove extent data products use a single machine learning method commonly with 30 m Landsat imagery, and significant inconsistencies remain among these data products. With huge amounts of satellite data involved and the heterogeneity of land surface characteristics across large geographic areas, finding the most suitable method for large-scale high-resolution mangrove mapping is a challenge. The objective of this study is to evaluate the performance of a machine learning ensemble for mangrove forest mapping at 20 m spatial resolution across West Africa using Sentinel-2 (optical) and Sentinel-1 (radar) imagery. The machine learning ensemble integrates three commonly used machine learning methods in land cover and land use mapping, including Random Forest (RF), Gradient Boosting Machine (GBM), and Neural Network (NN). The cloud-based big geospatial data processing platform Google Earth Engine (GEE) was used for pre-processing Sentinel-2 and Sentinel-1 data. Extensive validation has demonstrated that the machine learning ensemble can generate mangrove extent maps at high accuracies for all study regions in West Africa (92%–99% Producer’s Accuracy, 98%–100% User’s Accuracy, 95%–99% Overall Accuracy). This is the first-time that mangrove extent has been mapped at a 20 m spatial resolution across West Africa. The machine learning ensemble has the potential to be applied to other regions of the world and is therefore capable of producing high-resolution mangrove extent maps at global scales periodically. 
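The three-model ensemble named above (Random Forest, Gradient Boosting Machine, and a Neural Network) can be sketched with scikit-learn's soft-voting combiner on synthetic data. This is a hedged sketch of the general technique, not the study's implementation: the model hyperparameters and data are assumptions.

```python
# Sketch of an RF + GBM + NN ensemble via soft voting (averaged probabilities).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
        ("gbm", GradientBoostingClassifier(random_state=1)),
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                             random_state=1)),
    ],
    voting="soft",  # average predicted class probabilities across models
)
ensemble.fit(X_tr, y_tr)
acc = accuracy_score(y_te, ensemble.predict(X_te))
print(round(acc, 3))
```

Soft voting lets a confident model outweigh two uncertain ones, which is one way an ensemble can outperform any single member across heterogeneous land surfaces.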
  3. Despite providing many valuable ecosystem services, seagrasses are a threatened habitat and their global distribution is not fully known. For example, Venezuela lacks a national seagrass map. An established regional mapping approach for seagrass exists for the Google Earth Engine (GEE) platform, but requires a long time window to obtain sufficient data to overcome cloud and other challenges. Recently, GEE has released a Cloud Score+ quality band product for the purpose of cloud masking. Cloud masking could potentially reduce the time window needed for a representative multitemporal composite, which would allow for temporal analyses. We compare the performance of Cloud Score+ derived products against previously established multitemporal image composites acquired in different time ranges, and the ACOLITE‐processed single image composite. The Sentinel‐2 (S2) Level‐1C (L1C) imagery for the whole Venezuelan coastline was processed following three different approaches: (a) using a multitemporal composition of the full S2 L1C archive available and processed in GEE using the Dark Object Subtraction; (b) integrating the Cloud Score+ data set into the previous approach; and (c) using a single‐image offline approach applying ACOLITE atmospheric correction. Additional raster features were generated and a two‐step classification approach was performed with five classes, namely sand, seagrass, turbid water, deep water, and coral, and bootstrapped 20 times. Quantitatively, performance across the Cloud Score+ derived products was largely similar. While the full archive approach had the best quantitative results, the ACOLITE approach produced the best maps qualitatively. With this, we produced the first national seagrass map for Venezuela.
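The quality-masked compositing idea behind approaches (a) and (b) can be illustrated conceptually: each pixel's composite value is the median over time of only those observations whose cloud-quality score passes a threshold, analogous to applying the Cloud Score+ band before reducing the Sentinel-2 stack. The array shapes and the 0.6 threshold below are illustrative assumptions, and the example uses numpy rather than GEE.

```python
# Conceptual sketch of a quality-masked multitemporal median composite.
import numpy as np

def masked_median_composite(stack, quality, threshold=0.6):
    """stack, quality: arrays of shape (time, rows, cols).
    Returns the per-pixel median of observations with quality > threshold
    (NaN where no observation qualifies)."""
    masked = np.where(quality > threshold, stack, np.nan)
    return np.nanmedian(masked, axis=0)

# Toy 3-date, single-pixel example: the cloudy first date (quality 0.1)
# is excluded, so the composite is the median of the two clear dates.
stack = np.array([[[0.9]], [[0.2]], [[0.3]]])    # reflectance over time
quality = np.array([[[0.1]], [[0.8]], [[0.9]]])  # cloud-free probability
print(masked_median_composite(stack, quality)[0, 0])  # → 0.25
```

Because masking removes contaminated observations instead of averaging over them, fewer acquisition dates are needed to build a representative composite, which is the motivation stated in the abstract.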
  4. Wildfires in the Arctic-boreal zone have increased in frequency over recent decades, carrying substantial ecological, social, and economic consequences. Remote sensing is crucial for mapping burned areas, monitoring wildfire dynamics, and evaluating their impacts. However, existing high-latitude burned area products suffer from significant discrepancies, particularly in Siberia, and their coarse spatial resolutions limit accuracy and utility. To address these gaps, we developed a convolutional neural network model to map burned areas at a 30-meter resolution across the Arctic-boreal zone using Landsat and Sentinel-2 imagery. Our model achieved promising results, with an Intersection Over Union (IOU) of 0.77 and an F1 score of 0.85 on unseen test data, performing better in North America (IOU=0.84) than Eurasia (IOU=0.72) due to differences in fire regimes and data quality. Predictions for six representative years showed our model’s burned area closely matched the median values of Landsat, MODIS, and VIIRS-based products, although alignment varied annually and spatially. Visual assessments indicated our approach was generally more accurate, notably in detecting unburned vegetation islands within fire perimeters missed by other products. This research has numerous potential applications, such as analyzing feedback between vegetation and burn patterns, characterizing spatial dynamics of unburned islands, and improving carbon emission estimates through detailed burn severity assessments. Here we have provided the primary series of scripts used to achieve the above results. In these scripts we use historical vector fire polygons to download imagery from Landsat 5, 7, 8, 9 and Sentinel-2 to train a deep learning model called a UNet++ in the Arctic-boreal zone. Imagery is downloaded from Google Earth Engine, while all other processing is done locally. 
The series of six scripts covers the main steps: downloading training data, pre-processing it, training the model, and applying the model across the Arctic-boreal zone. All scripting is done in Python through .py scripts and Jupyter notebooks (.ipynb). Our study area includes Alaska, Canada, and Eurasia, and we trained our model on all historical fire polygons from 1985-2020.
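The two evaluation metrics reported above, Intersection Over Union (IOU) and F1 score, can be computed directly from binary burned/unburned masks. The sketch below uses toy arrays, not the study's data.

```python
# IoU and F1 for binary burned-area masks.
import numpy as np

def iou_and_f1(pred, truth):
    """pred, truth: boolean arrays of the same shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    precision = inter / pred.sum() if pred.sum() else 0.0
    recall = inter / truth.sum() if truth.sum() else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return float(iou), float(f1)

pred  = np.array([1, 1, 0, 1, 0], dtype=bool)  # model output (toy)
truth = np.array([1, 0, 0, 1, 1], dtype=bool)  # reference mask (toy)
print(iou_and_f1(pred, truth))
```

IoU penalizes both omission and commission through the union term, which is why it is typically lower than F1 on the same prediction (0.77 vs. 0.85 in the results above).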
  5. The ability to automatically delineate individual tree crowns using remote sensing data opens the possibility to collect detailed tree information over large geographic regions. While individual tree crown delineation (ITCD) methods have proven successful in conifer-dominated forests using Light Detection and Ranging (LiDAR) data, it remains unclear how well these methods can be applied in deciduous broadleaf-dominated forests. We applied five automated LiDAR-based ITCD methods across fifteen plots ranging from conifer- to broadleaf-dominated forest stands at Harvard Forest in Petersham, MA, USA, and assessed accuracy against manual delineation of crowns from unmanned aerial vehicle (UAV) imagery. We then identified tree- and plot-level factors influencing the success of automated delineation techniques. There was relatively little difference in accuracy between automated crown delineation methods (51–59% aggregated plot accuracy) and, despite parameter tuning, none of the methods produced high accuracy across all plots (27–90% range in plot-level accuracy). The accuracy of all methods was significantly higher with increased plot conifer fraction, and individual conifer trees were identified with higher accuracy (mean 64%) than broadleaf trees (42%) across methods. Further, while tree-level factors (e.g., diameter at breast height, height and crown area) strongly influenced the success of crown delineations, the influence of plot-level factors varied. The most important plot-level factor was species evenness, a metric of relative species abundance that is related to both conifer fraction and the degree to which trees can fill canopy space. As species evenness decreased (e.g., high conifer fraction and less efficient filling of canopy space), the probability of successful delineation increased.
Overall, our work suggests that the tested LiDAR-based ITCD methods perform equally well in a mixed temperate forest, but that delineation success is driven by forest characteristics like functional group, tree size, diversity, and crown architecture. While LiDAR-based ITCD methods are well suited for stands with distinct canopy structure, we suggest that future work explore the integration of phenology and spectral characteristics with existing LiDAR as an approach to improve crown delineation in broadleaf-dominated stands. 
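One common way to score automated delineations against manual reference crowns, sketched below, is to count a detected crown as correct when its overlap with a reference crown reaches IoU >= 0.5; plot-level accuracy is then the matched fraction of reference crowns. This matching protocol and the 0.5 threshold are assumptions for illustration, not necessarily the paper's exact procedure, and crowns are simplified to 1-D intervals to keep the example short.

```python
# Sketch: score detected crowns against reference crowns via IoU matching.
def interval_iou(a, b):
    """IoU of two intervals given as (start, end) tuples."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def delineation_accuracy(detected, reference, thresh=0.5):
    """Fraction of reference crowns matched by at least one detection."""
    matched = sum(
        any(interval_iou(d, r) >= thresh for d in detected) for r in reference
    )
    return matched / len(reference)

reference = [(0, 2), (3, 5), (6, 8)]        # manually delineated crowns (toy)
detected  = [(0.2, 2.1), (3.5, 5.5), (10, 11)]  # automated output (toy)
print(delineation_accuracy(detected, reference))
```

In the toy example, the first two detections overlap their reference crowns well enough to match, while the third reference crown is missed, giving an accuracy of 2/3.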