-
We present the Fire Inventory from National Center for Atmospheric Research (NCAR) version 2.5 (FINNv2.5), a fire emissions inventory that provides publicly available emissions of trace gases and aerosols for various applications, including use in global and regional atmospheric chemistry modeling. FINNv2.5 includes numerous updates to the FINN version 1 framework to better represent burned area, vegetation burned, and chemicals emitted. Major changes include the use of active fire detections from the Visible Infrared Imaging Radiometer Suite (VIIRS) at 375 m spatial resolution, which allows smaller fires to be included in the emissions processing. The calculation of burned area has been updated such that a more rigorous approach is used to aggregate fire detections, which better accounts for larger fires and enables using multiple satellite products simultaneously for emissions estimates. Fuel characterization and emissions factors have also been updated in FINNv2.5. Daily fire emissions for many trace gases and aerosols are determined for 2002–2019 (Moderate Resolution Imaging Spectroradiometer (MODIS)-only fire detections) and 2012–2019 (MODIS + VIIRS fire detections). The non-methane organic gas emissions are allocated to the species of several commonly used chemical mechanisms. We compare FINNv2.5 emissions against other widely used fire emissions inventories. The performance of FINNv2.5 emissions as inputs to a chemical transport model is assessed with satellite observations. Uncertainties in the emissions estimates remain, particularly in Africa and South America during August–October and in southeast and equatorial Asia in March and April. Recommendations for future evaluation and use are given.
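As a rough illustration of the bottom-up calculation that FINN-type inventories perform for each fire detection (emission = area burned × fuel loading × fraction of fuel combusted × emission factor), the sketch below shows the arithmetic in Python. The numeric values and function names are illustrative assumptions, not FINNv2.5's actual parameters or code.

```python
# Minimal sketch of a bottom-up fire emission estimate of the kind FINN-type
# inventories compute per fire detection. Values are illustrative only.

def fire_emission_kg(burned_area_m2: float,
                     fuel_load_kg_m2: float,
                     combustion_completeness: float,
                     emission_factor_g_kg: float) -> float:
    """Emission (kg of species) = area burned x fuel loading
    x fraction of fuel combusted x emission factor (g species per kg dry fuel)."""
    dry_matter_burned_kg = burned_area_m2 * fuel_load_kg_m2 * combustion_completeness
    return dry_matter_burned_kg * emission_factor_g_kg / 1000.0

if __name__ == "__main__":
    area = 375.0 * 375.0   # m^2, roughly one 375 m VIIRS pixel assumed fully burned
    fuel = 10.0            # kg dry matter per m^2 (illustrative fuel loading)
    cc = 0.5               # fraction of fuel combusted (illustrative)
    ef_co = 100.0          # g CO per kg dry matter burned (illustrative)
    print(f"CO emitted: {fire_emission_kg(area, fuel, cc, ef_co):.1f} kg")
```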
-
Data on individual tree crowns from remote sensing have the potential to advance forest ecology by providing information about forest composition and structure with a continuous spatial coverage over large spatial extents. Classifying individual trees to their taxonomic species over large regions from remote sensing data is challenging. Methods to classify individual species are often accurate for common species, but perform poorly for less common species and when applied to new sites. We ran a data science competition to help identify effective methods for the task of classification of individual crowns to species identity. The competition included data from three sites to assess each method's ability to generalize patterns across two sites simultaneously and apply methods to an untrained site. Three different metrics were used to assess and compare model performance. Six teams participated, representing four countries and nine individuals. The highest performing method from a previous competition in 2017 was applied and used as a baseline to understand advancements and changes in successful methods. The best species classification method was based on a two-stage fully connected neural network that significantly outperformed the baseline random forest and gradient boosting ensemble methods. All methods generalized well by showing relatively strong performance on the trained sites (accuracy = 0.46–0.55, macro F1 = 0.09–0.32, cross entropy loss = 2.4–9.2), but generally failed to transfer effectively to the untrained site (accuracy = 0.07–0.32, macro F1 = 0.02–0.18, cross entropy loss = 2.8–16.3). Classification performance was influenced by the number of samples with species labels available for training: most methods predicted common species at the training sites well (maximum F1 score of 0.86), whereas uncommon species were not predicted at all. Classification errors were most common between species in the same genus and between different species that occur in the same habitat. Most methods performed better than the baseline in detecting if a species was not in the training data by predicting an untrained mixed-species class, especially in the untrained site. This work has highlighted that data science competitions can encourage advancement of methods, particularly by bringing in new people from outside the focal discipline, and by providing an open dataset and evaluation criteria from which participants can learn.
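For readers unfamiliar with the three comparison metrics named above (overall accuracy, macro-averaged F1, and cross-entropy loss), the snippet below shows how they can be computed with scikit-learn. The species codes, labels, and probabilities are made up for illustration and are not competition data.

```python
# Illustrative computation of overall accuracy, macro F1, and cross-entropy loss.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, log_loss

# Hypothetical species codes; kept in alphabetical order so that the columns of
# y_prob match the label order scikit-learn uses internally in log_loss.
classes = ["ACRU", "OTHER", "PIPA", "QULA"]
y_true = np.array(["PIPA", "PIPA", "QULA", "ACRU", "OTHER", "PIPA"])
y_prob = np.array([            # per-crown class probabilities, one row per crown
    [0.1, 0.1, 0.7, 0.1],
    [0.2, 0.1, 0.4, 0.3],
    [0.1, 0.1, 0.2, 0.6],
    [0.5, 0.2, 0.1, 0.2],
    [0.2, 0.2, 0.3, 0.3],
    [0.1, 0.1, 0.6, 0.2],
])
y_pred = np.array(classes)[y_prob.argmax(axis=1)]   # hard labels from probabilities

print("accuracy     :", accuracy_score(y_true, y_pred))
print("macro F1     :", f1_score(y_true, y_pred, average="macro"))
print("cross entropy:", log_loss(y_true, y_prob, labels=classes))
```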
-
Distance is the most fundamental metric in spatial analysis and modeling. Planar distance and geodesic distance are the common distance measurements in current geographic information systems and geospatial analytic tools. However, there is little understanding of how to measure distance on a digital terrain surface or of the uncertainty of that measurement. To fill this gap, this study applies a Monte Carlo simulation to evaluate seven surface-adjustment methods for distance measurement on a digital terrain model. Using parallel computing techniques and a memory optimization method, the processing time for the distance calculations of 6,000 simulated transects was reduced to a manageable level. The accuracy and computational efficiency of the surface-adjustment methods were systematically compared in six study areas with various terrain types and with digital elevation models (DEMs) at different resolutions. The major findings indicate a trade-off between measurement accuracy and computational efficiency: calculations on finer-resolution DEMs improve measurement accuracy but increase processing times. Among the methods compared, the weighted average demonstrates the highest accuracy and the second fastest processing time. Additionally, the choice of surface-adjustment method has a greater impact on the accuracy of distance measurements in rougher terrain.
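To make the idea of surface-adjusted distance concrete, the sketch below samples elevations along a straight transect over a gridded DEM and sums the 3-D segment lengths, then compares the result with the planar distance. It uses a simple nearest-cell elevation lookup on a synthetic DEM; it is a generic illustration of the concept, not one of the seven surface-adjustment methods evaluated in the study.

```python
# Surface-adjusted vs. planar distance along a transect over a gridded DEM (sketch).
import numpy as np

def surface_distance(dem: np.ndarray, cell_size: float,
                     start_rc: tuple, end_rc: tuple, n_steps: int = 100) -> float:
    """Approximate surface distance between two (row, col) points on a DEM."""
    rows = np.linspace(start_rc[0], end_rc[0], n_steps + 1)
    cols = np.linspace(start_rc[1], end_rc[1], n_steps + 1)
    z = dem[np.round(rows).astype(int), np.round(cols).astype(int)]  # nearest cell
    dx = np.diff(cols) * cell_size
    dy = np.diff(rows) * cell_size
    dz = np.diff(z)
    return float(np.sqrt(dx**2 + dy**2 + dz**2).sum())

# Toy example: a 100 x 100 synthetic DEM (10 m cells) with a ridge plus noise.
rng = np.random.default_rng(0)
dem = 50.0 * np.sin(np.linspace(0, np.pi, 100))[None, :] + rng.normal(0, 1, (100, 100))
planar = np.hypot(90 * 10.0, 90 * 10.0)                 # planar distance, metres
surface = surface_distance(dem, 10.0, (5, 5), (95, 95)) # surface-adjusted distance
print(f"planar {planar:.1f} m vs surface {surface:.1f} m")
```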
-
Airborne remote sensing offers unprecedented opportunities to efficiently monitor vegetation, but methods to delineate and classify individual plant species using the collected data are still actively being developed and improved. The Integrating Data science with Trees and Remote Sensing (IDTReeS) plant identification competition openly invited scientists to create and compare individual tree mapping methods. Participants were tasked with training taxon identification algorithms on two sites and then transferring their methods to a third, unseen site, using field-based plant observations in combination with airborne remote sensing image data products from the National Ecological Observatory Network (NEON). These data were captured by a high-resolution digital camera sensitive to red, green, and blue (RGB) light, a hyperspectral imaging spectrometer spanning visible to shortwave-infrared wavelengths, and lidar systems, which together capture the spectral and structural properties of vegetation. As participants in the IDTReeS competition, we developed a two-stage deep learning approach to integrate NEON remote sensing data from all three sensors and classify individual plant species and genera. The first stage was a convolutional neural network that generates taxon probabilities from RGB images, and the second stage was a fusion neural network that "learns" how to combine these probabilities with hyperspectral and lidar data. Our two-stage approach leverages the ability of neural networks to flexibly and automatically extract descriptive features from complex, high-dimensional image data. Our method achieved an overall classification accuracy of 0.51 on the training set and 0.32 on the test set, which contained data from an unseen site with unknown taxon classes. Although transferring classification algorithms to unseen sites with unknown species and genus classes proved to be a challenging task, developing methods with openly available NEON data, which will be collected in a standardized format for 30 years, allows for continual improvements and major gains for members of the computational ecology community. We outline promising directions related to data preparation and processing techniques for further investigation, and provide our code to contribute to open, reproducible science efforts.
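The sketch below (PyTorch) illustrates the general shape of the second, "fusion" stage described above: a small fully connected network that takes the stage-one RGB-CNN class probabilities concatenated with hyperspectral and lidar-derived features and outputs refined taxon scores. Layer sizes, feature counts, and names are assumptions for illustration, not the architecture used in the competition entry.

```python
# Sketch of a fusion network combining RGB-CNN probabilities with HSI and lidar features.
import torch
import torch.nn as nn

N_TAXA = 30    # number of species/genus classes (assumed)
N_HSI = 50     # hyperspectral features per crown, e.g. selected bands (assumed)
N_LIDAR = 5    # lidar-derived features, e.g. height percentiles (assumed)

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_TAXA + N_HSI + N_LIDAR, 128),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(128, N_TAXA),   # logits; softmax applied afterwards
        )

    def forward(self, rgb_probs, hsi_feats, lidar_feats):
        # Concatenate stage-one probabilities with the other sensors' features.
        x = torch.cat([rgb_probs, hsi_feats, lidar_feats], dim=1)
        return self.net(x)

# One forward pass on a fake batch of 8 crowns.
model = FusionNet()
logits = model(torch.rand(8, N_TAXA), torch.rand(8, N_HSI), torch.rand(8, N_LIDAR))
print(logits.softmax(dim=1).shape)   # torch.Size([8, 30])
```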
-
Accurately mapping tree species composition and diversity is a critical step towards spatially explicit and species-specific ecological understanding. The National Ecological Observatory Network (NEON) is a valuable source of open ecological data across the United States. Freely available NEON data include in-situ measurements of individual trees, including stem locations, species, and crown diameter, along with the NEON Airborne Observation Platform (AOP) airborne remote sensing imagery, including hyperspectral, multispectral, and light detection and ranging (LiDAR) data products. An important aspect of predicting species using remote sensing data is creating high-quality training sets for optimal classification purposes. Ultimately, manually creating training data is an expensive and time-consuming task that relies on human analyst decisions and may require external data sets or information. We combine in-situ and airborne remote sensing NEON data to evaluate the impact of automated training set preparation and a novel data preprocessing workflow on classifying the four dominant subalpine coniferous tree species at the Niwot Ridge Mountain Research Station forested NEON site in Colorado, USA. We trained pixel-based Random Forest (RF) machine learning models using a series of training data sets along with remote sensing raster data as descriptive features. The highest classification accuracies, 69% and 60% based on internal RF error assessment and an independent validation set, respectively, were obtained using circular tree crown polygons created with half the maximum crown diameter per tree. LiDAR-derived data products were the most important features for species classification, followed by vegetation indices. This work contributes to the open development of well-labeled training data sets for forest composition mapping using openly available NEON data without requiring external data collection, manual delineation steps, or site-specific parameters.
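The snippet below sketches the general pixel-based Random Forest workflow described above: stacked raster features (e.g. LiDAR-derived layers, vegetation indices, spectral bands) flattened to a pixel-by-feature matrix, fit with scikit-learn, and scored both with the internal out-of-bag estimate and an independent hold-out. Feature counts, labels, and data are placeholders, not the NEON Niwot Ridge data or the study's actual feature set.

```python
# Illustrative pixel-based Random Forest classification on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_pixels, n_features = 5000, 12            # e.g. CHM, NDVI, selected bands (assumed)
X = rng.normal(size=(n_pixels, n_features))
y = rng.integers(0, 4, size=n_pixels)      # four conifer species codes (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0, n_jobs=-1)
rf.fit(X_train, y_train)

print("Internal (out-of-bag) accuracy     :", round(rf.oob_score_, 3))
print("Independent validation accuracy    :",
      round(accuracy_score(y_test, rf.predict(X_test)), 3))
print("Feature importances                :", np.round(rf.feature_importances_, 3))
```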