The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of SfM and LiDAR point clouds, collected concurrently during five UAS flights over experimental fields of a short crop (snap bean), in order to explore how well the SfM approach performs relative to LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh (C2M) distance maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated crop height and row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses showed that the SfM approach is comparable to LiDAR under the same UAS flight settings; however, its altimetric accuracy depended largely on the number and distribution of the ground control points.
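The height comparison described in this abstract reduces to two simple operations: differencing a surface model against a terrain model to obtain a crop height model, and scoring derived heights against field measurements with an RMSE. A minimal sketch of those two steps (function names and the sample values are illustrative, not from the study):

```python
import numpy as np

def crop_height_model(dsm: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """Crop height model (CHM) as the per-cell difference DSM - DEM."""
    return dsm - dem

def rmse(estimated: np.ndarray, measured: np.ndarray) -> float:
    """Root-mean-square error between estimated and field-measured values."""
    d = np.asarray(estimated, dtype=float) - np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))
```

The same `rmse` helper applies unchanged to either crop height or row width; a ±0.02 m disagreement between a CHM-derived height and a field tape measurement yields an RMSE of 0.02 m, matching the scale of the errors reported above.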
Deriving Land and Water Surface Elevations in the Northeastern Yucatán Peninsula Using PPK GPS and UAV-Based Structure from Motion
While UAV-based imaging methods such as drone lidar scanning (DLS) and Structure from Motion (SfM) are now widely used in geographic research, accurate water surface elevation (WSE) measurement remains a difficult problem: water absorbs the wavelengths commonly used for lidar, and SfM feature matching fails on these dynamic surfaces. We present a methodology for measuring WSE in a particularly challenging environment, the Yucatán Peninsula, where cenotes – exposed, water-filled sinkholes – provide an observation point into the critically important regional groundwater supply. In the northeastern Yucatán, elevations are very close to sea level, relief is low, and the near-vertical cenote walls complicate the use of the so-called “water edge” technique for WSE measurement. We demonstrate how post-processed kinematic (PPK) correction of even a single Real-Time Kinematic (RTK) Global Positioning System (GPS) unit can be used to finely register the SfM-derived point cloud, and present evidence from both simulations and an empirical study that quantifies the effect of “dip” in SfM-based environmental reconstructions. Finally, we present a statistical analysis of the problem of “thick” or “fuzzy” point clouds derived from SfM, with particular emphasis on their interactions with WSE measurement.
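One practical way to extract a WSE from a “thick” SfM point cloud is to treat the water-surface points as a noisy sample and take a robust central estimate after applying the PPK-derived vertical correction. The sketch below illustrates that idea only; it is not the authors' implementation, and the median choice and the 16th–84th percentile “thickness” measure are assumptions:

```python
import numpy as np

def estimate_wse(water_z: np.ndarray, ppk_offset: float = 0.0) -> dict:
    """Robust water-surface-elevation estimate from a fuzzy SfM point cloud.

    water_z    : z-coordinates of points classified as water surface
    ppk_offset : vertical correction from PPK-processed GPS (assumed known)
    """
    z = np.asarray(water_z, dtype=float) + ppk_offset
    return {
        # Median resists stray points far above or below the true surface.
        "wse_median": float(np.median(z)),
        # Inter-percentile range as a simple measure of cloud "thickness".
        "spread": float(np.percentile(z, 84) - np.percentile(z, 16)),
    }
```

Because the PPK offset shifts every point equally, it moves the WSE estimate but leaves the spread (the cloud's fuzziness) unchanged.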
- Award ID(s):
- 1852290
- PAR ID:
- 10213699
- Date Published:
- Journal Name:
- Papers in Applied Geography
- ISSN:
- 2375-4931
- Page Range / eLocation ID:
- 1 to 22
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
High-resolution topographic data are necessary to understand benthic habitat, quantify processes at the water–sediment interface, and support computational fluid dynamics models of both surface and hyporheic hydraulics. In riverine systems, these data are typically collected using traditional surveying methods (total station, DGPS, etc.), airborne or terrestrial laser scanning, and photogrammetry. Recently, handheld surveying equipment has been rapidly gaining popularity, in part due to its processing capacity, price, size, and versatility. One such device is the iPhone LiDAR, which may offer a good balance between precision and ease of use and is a potential replacement for conventional measuring tools. Here, we evaluated the accuracy of the LiDAR sensor and of a Structure from Motion (SfM) method based on photos collected with the iPhone camera. We compared the LiDAR and SfM elevations to those from a high-precision laser scanner for an experimental, rough, water-worked gravel-bed channel with boulder-like structures. We observed that both the LiDAR and SfM methods captured the overall streambed morphology and detected large (Hs ≥ 15 cm) and macro (5 cm ≤ Hs < 15 cm) scales of topographic variation (Hs, roughness height). The SfM technique also captured small-scale (Hs < 5 cm) roughness, whereas the LiDAR consistently simplified it, with errors of 3.7 mm.
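The roughness scales quoted above (large, macro, small) are simple threshold bins on the roughness height Hs. A minimal sketch, where Hs is approximated as the standard deviation of mean-detrended bed elevations (an assumption for illustration; a real workflow would also remove the channel slope before computing Hs):

```python
import numpy as np

def roughness_height(elev: np.ndarray) -> float:
    """Hs approximated as the std. dev. of mean-detrended bed elevations (m)."""
    z = np.asarray(elev, dtype=float)
    return float(np.std(z - z.mean()))

def roughness_scale(hs_m: float) -> str:
    """Bin Hs (in meters) into the scales used in the comparison above."""
    if hs_m >= 0.15:
        return "large"   # Hs >= 15 cm
    if hs_m >= 0.05:
        return "macro"   # 5 cm <= Hs < 15 cm
    return "small"       # Hs < 5 cm
```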
Models of 12 historic masonry buildings damaged by an EF4 tornado were created by combining Unmanned Aerial Vehicle Structure-from-Motion (UAV-SfM) and Light Detection and Ranging (LiDAR) point clouds. The building models can be used for a myriad of purposes, such as structural analysis. Additionally, the point cloud combination workflow can be applied to other projects.
Flood occurrence is increasing due to expanding urbanization and extreme weather such as hurricanes; hence, research on methods of inundation monitoring and mapping has grown in an effort to reduce the severe impacts of flood disasters. This research studies and compares two methods for inundation depth estimation using UAV images and topographic data. The methods consist of three main stages: (1) extracting flooded areas and creating 2D inundation polygons using deep learning; (2) reconstructing the 3D water surface using the polygons and topographic data; and (3) deriving a water depth map from the 3D reconstructed water surface and a pre-flood DEM. The two methods differ in how they reconstruct the 3D water surface (stage 2). The first method uses structure from motion (SfM) to create a point cloud of the area from overlapping UAV images, and the water polygons resulting from stage (1) are applied for water point cloud classification. The second method reconstructs the water surface by intersecting the water polygons with a pre-flood DEM created from pre-flood LiDAR data. We evaluated the proposed methods for inundation depth mapping over the Town of Princeville during a flooding event caused by Hurricane Matthew. The methods were compared and validated using USGS gauge water level data acquired during the flood event. The RMSEs for water depth using the SfM method and the integrated method based on deep learning and the DEM were 0.34 m and 0.26 m, respectively.
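Stage (3) of either method is a per-cell subtraction: water depth is the reconstructed water surface minus the pre-flood ground elevation, restricted to the mapped inundation polygons. A hedged sketch of that step (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def depth_map(water_surface: np.ndarray, preflood_dem: np.ndarray,
              flood_mask: np.ndarray) -> np.ndarray:
    """Inundation depth as water surface minus pre-flood ground elevation.

    Cells outside the flood polygons (mask == False) are set to NaN;
    negative depths (surface below ground) are clipped to zero.
    """
    depth = np.where(flood_mask, water_surface - preflood_dem, np.nan)
    return np.clip(depth, 0.0, None)
```

Clipping negative values guards against small vertical misalignments between the reconstructed surface and the DEM near the polygon edges.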
Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR)- or structure-from-motion (SfM)-derived canopy height models (CHMs). Sensor–data fusions were used to consider either a single observation period or near-monthly observation frequencies for the integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest for hyperspectral and LiDAR–CHM fusions (78% and 89%, respectively), followed by multispectral and phenometric–SfM–CHM fusions (52% and 60%, respectively) and RGB and SfM–CHM fusions (45% and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracy between low-cost and high-cost sensor configurations but highlight that off-the-shelf multispectral sensors can achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine-learning image classifiers.
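The sensor–data fusions described above amount to stacking a CHM as an extra per-pixel band alongside the spectral bands before classification. A minimal sketch of that feature stacking (names and shapes are illustrative; the study's actual pipeline is not reproduced here):

```python
import numpy as np

def fuse_features(spectral: np.ndarray, chm: np.ndarray) -> np.ndarray:
    """Stack per-pixel spectral bands with a canopy height model band.

    spectral : (rows, cols, bands) reflectance image
    chm      : (rows, cols) canopy height model from LiDAR or SfM
    Returns an (rows * cols, bands + 1) feature matrix suitable for a
    per-pixel machine-learning classifier.
    """
    rows, cols, bands = spectral.shape
    stacked = np.concatenate([spectral, chm[..., None]], axis=-1)
    return stacked.reshape(rows * cols, bands + 1)
```

Repeating this stacking for each observation date and concatenating along the feature axis is one way to fold phenometrics into the same classifier input.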