Title: Comparison of UAS-Based Structure-from-Motion and LiDAR for Structural Characterization of Short Broadacre Crops
The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of SfM and LiDAR point clouds, collected concurrently during five UAS flights over experimental fields of a short crop (snap bean), in order to explore how well the SfM approach performs compared with LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh (C2M) distance maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated crop height and row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses provided proof that the SfM approach is comparable to LiDAR under the same UAS flight settings. However, its altimetric accuracy largely relied on the number and distribution of the ground control points.
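As context for the CHM-based evaluation described above, here is a minimal sketch (synthetic numbers and an assumed workflow, not the study's code or data) of differencing a surface model and a terrain model into a CHM and scoring per-plot crop heights against field measurements with RMSE:

```python
import numpy as np

# Illustrative sketch, not the paper's pipeline: a crop height model (CHM)
# is the difference between a digital surface model (DSM) and a DEM; a
# per-plot height estimate (here, the 95th percentile of CHM cells) is
# then compared to field-measured heights via RMSE.

def crop_height_rmse(dsm, dem, field_heights, plot_masks):
    """CHM = DSM - DEM; per-plot height = 95th percentile of CHM cells."""
    chm = dsm - dem
    est = np.array([np.percentile(chm[m], 95) for m in plot_masks])
    return np.sqrt(np.mean((est - np.asarray(field_heights)) ** 2))

# Toy example: four 2x2 plots on a 4x4 grid (all values invented).
dem = np.zeros((4, 4))
dsm = np.array([[0.30, 0.30, 0.50, 0.50],
                [0.30, 0.30, 0.50, 0.50],
                [0.40, 0.40, 0.20, 0.20],
                [0.40, 0.40, 0.20, 0.20]])
masks = [np.zeros((4, 4), bool) for _ in range(4)]
masks[0][:2, :2] = True; masks[1][:2, 2:] = True
masks[2][2:, :2] = True; masks[3][2:, 2:] = True
rmse = crop_height_rmse(dsm, dem, [0.32, 0.48, 0.41, 0.21], masks)
print(round(rmse, 3))
```

The same RMSE computation applies to row-width estimates; only the per-plot summary statistic changes.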
Award ID(s):
1827551
PAR ID:
10347011
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Remote Sensing
Volume:
13
Issue:
19
ISSN:
2072-4292
Page Range / eLocation ID:
3975
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Models of 12 historic masonry buildings damaged by an EF4 tornado were created by combining Unmanned Aerial Vehicle Structure-from-Motion (UAV-SfM) and Light Detection and Ranging (LiDAR) point clouds. The building models can be used for a myriad of purposes, such as structural analysis. Additionally, the point cloud combination workflow can be applied to other projects. 
  2. Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR)- or structure-from-motion (SfM)-derived canopy height models (CHMs). Sensor–data fusions considered either a single observation period or near-monthly observation frequencies for integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest for hyperspectral and LiDAR–CHM fusions (78 and 89%, respectively), followed by multispectral and phenometric–SfM–CHM fusions (52 and 60%, respectively) and RGB and SfM–CHM fusions (45 and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracies between economical and expensive sensor networks but highlight that off-the-shelf multispectral sensors may achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine learning image classifiers.
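The feature-level fusion in item 2 can be sketched generically: stack per-pixel spectral bands with a CHM height feature and train a classifier on the combined features. Everything below is synthetic and illustrative; the band count, toy labels, and random forest choice are assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical sensor-data fusion sketch: spectral bands + canopy height
# as one feature matrix, classified per pixel. All data are synthetic.
rng = np.random.default_rng(0)
n = 600
spectral = rng.random((n, 5))            # e.g. 5 multispectral bands
chm = rng.random((n, 1))                 # canopy height (m) from LiDAR/SfM
labels = (chm[:, 0] > 0.5).astype(int)   # toy PFT labels: short vs tall
X = np.hstack([spectral, chm])           # the "fusion" step: feature stacking

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[:400], labels[:400])
acc = accuracy_score(labels[400:], clf.predict(X[400:]))
print(f"holdout accuracy: {acc:.2f}")
```

Adding phenometrics, as the study does, amounts to appending further columns (per-pixel phenology features) to the same stacked matrix before training.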
  3. Creating cave maps is an essential part of cave research. Traditional cartographic efforts are extremely time-consuming and subjective, motivating the development of new techniques using terrestrial lidar scanners and mobile lidar systems. However, processing the large point clouds from these scanners to produce detailed, yet manageable "maps" remains a challenge. In this work, we present a methodology for synthesizing a basemap representing the cave floor from large-scale point clouds, based on a case study of a SLAM-based lidar data acquisition from a cave system in the archaeological site of Las Cuevas, Belize. In 4 days of fieldwork, the 335 m length of the cave system was scanned, resulting in a point cloud of 4.1 billion points, with 1.6 billion points classified as part of the cave floor. This point cloud was processed to produce a basemap that can be used in GIS, where natural and anthropogenic features are clearly visible and can be traced to create accurate 2D maps similar to traditional cartography.
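A basemap like the one in item 3 is, at its core, a rasterization of the classified floor points. A minimal sketch of that step, assuming a simple grid-binning scheme (the paper's actual processing is far more involved):

```python
import numpy as np

# Assumed workflow sketch: rasterize cave-floor points into a 2D grid by
# binning XY coordinates and keeping the minimum Z per cell, a common way
# to summarize a floor point cloud into a GIS-ready raster.

def floor_basemap(points, cell=0.5):
    """points: (N, 3) array of floor returns; returns a 2D min-Z raster."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    raster = np.full(shape, np.nan)       # NaN = no floor returns in cell
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(raster[i, j]) or z < raster[i, j]:
            raster[i, j] = z
    return raster

pts = np.array([[0.1, 0.1, 1.0], [0.2, 0.2, 0.8], [0.9, 0.1, 1.2]])
print(floor_basemap(pts, cell=0.5))
```

At billions of points, the per-point loop would be replaced with vectorized or out-of-core binning, but the raster produced is the same.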
  4. The ATLAS sensor onboard the ICESat-2 satellite is a photon-counting lidar (PCL) with a primary mission to map Earth's ice sheets. A secondary goal of the mission is to provide vegetation and terrain elevations, which are essential for calculating the planet's biomass carbon reserves. A drawback of ATLAS is that the sensor does not provide reliable terrain height estimates in dense, high-closure forests because only a few photons reach the ground through the canopy and return to the detector. This low penetration translates into lower accuracy for the resultant terrain model. Tropical forest measurements with ATLAS have an additional problem estimating top of canopy because of frequent atmospheric phenomena such as fog and low clouds that can be misinterpreted as top of the canopy. To alleviate these issues, we propose using a ConvPoint neural network for 3D point clouds and high-density airborne lidar as training data to classify vegetation and terrain returns from ATLAS. The semantic segmentation network provides excellent results and could be used in parallel with the current ATL08 noise filtering algorithms, especially in areas with dense vegetation. We use high-density airborne lidar data acquired along ICESat-2 transects in Central American forests as a ground reference for training the neural network to distinguish between noise photons and photons lying between the terrain and the top of the canopy. Each photon event receives a label (noise or signal) in the test phase, providing automated noise-filtering of the ATL03 data. The terrain and top of canopy elevations are subsequently aggregated in 100 m segments using a series of iterative smoothing filters. We demonstrate improved estimates for both terrain and top of canopy elevations compared to the ATL08 100 m segment estimates. 
The neural network (NN) noise filtering reliably eliminated outlier top of canopy estimates caused by low clouds, and aggregated root mean square error (RMSE) decreased from 7.7 m for ATL08 to 3.7 m for the NN prediction (18 test profiles aggregated). For terrain elevations, RMSE decreased from 5.2 m for ATL08 to 3.3 m for the NN prediction, compared to airborne lidar reference profiles.
Keywords: ICESat-2; lidar; point cloud; noise filtering
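The post-classification aggregation in item 4 can be sketched simply: once each photon is labeled signal or noise, signal elevations are grouped into 100 m along-track segments and smoothed. The median summary and 3-point moving average below are stand-ins for the paper's iterative smoothing filters, and all numbers are invented:

```python
import numpy as np

# Hedged sketch of photon aggregation after noise filtering: keep only
# signal-labeled photons, bin by 100 m along-track distance, take the
# per-segment median elevation, then lightly smooth the segment series.

def segment_elevations(dist_m, elev_m, is_signal, seg_len=100.0):
    d, z = dist_m[is_signal], elev_m[is_signal]
    seg = np.floor(d / seg_len).astype(int)
    med = np.array([np.median(z[seg == s]) for s in np.unique(seg)])
    # 3-point moving average as a stand-in for iterative smoothing filters
    if med.size >= 3:
        med[1:-1] = (med[:-2] + med[1:-1] + med[2:]) / 3.0
    return med

dist = np.array([10., 50., 120., 180., 250., 260.])    # along-track (m)
elev = np.array([5.0, 5.2, 6.0, 6.2, 9.0, 7.0])        # elevations (m)
signal = np.array([True, True, True, True, True, True])
print(segment_elevations(dist, elev, signal))
```

The same aggregation runs separately for terrain photons and top-of-canopy photons to produce the two 100 m segment estimates compared against ATL08.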
  5. Unsupervised machine learning algorithms (clustering, genetic, and principal component analysis) automate Unmanned Aerial Vehicle (UAV) missions as well as the creation and refinement of iterative 3D photogrammetric models with a next best view (NBV) approach. The novel approach uses Structure-from-Motion (SfM) to achieve convergence to a specified orthomosaic resolution by identifying edges in the point cloud and planning cameras that "view" the holes identified by those edges, without requiring an initial model. This iterative UAV photogrammetric method runs successfully in various Microsoft AirSim environments. Simulated ground sampling distance (GSD) of the models reaches as low as 3.4 cm per pixel, and successive iterations generally improve resolution. Beyond the simulated environments, a field study of a retired municipal water tank illustrates the practical application and advantages of automated iterative UAV inspection of infrastructure, using 63% fewer photographs than a comparable manual flight while producing point clouds of comparable density with a GSD of less than 3 cm per pixel. Each iteration qualitatively increases resolution according to a logarithmic regression, reduces holes in the models, and adds detail to model edges.
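For context on the resolution figures quoted in item 5, the standard photogrammetric ground sampling distance relation can be computed directly. The camera parameters below are illustrative, not those used in the study:

```python
# Standard GSD relation: ground footprint of one pixel grows with flight
# height and sensor width, and shrinks with focal length and image width.

def gsd_cm(sensor_width_mm, focal_length_mm, height_m, image_width_px):
    """GSD (cm/px) = (sensor width * flight height) / (focal length * image width)."""
    return (sensor_width_mm * height_m * 100.0) / (focal_length_mm * image_width_px)

# Illustrative example: 13.2 mm sensor, 8.8 mm lens, 4000 px wide image,
# flying at 90 m above ground (hypothetical values).
print(gsd_cm(13.2, 8.8, 90.0, 4000))
```

This is why the NBV planner can trade flight height against resolution: halving the height above the surface roughly halves the GSD for the same camera.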