
Title: How to Measure Distance on a Digital Terrain Surface and Why it Matters in Geographical Analysis
Distance is the most fundamental metric in spatial analysis and modeling. Planar distance and geodesic distance are the common distance measurements in current geographic information systems and geospatial analytic tools. However, little is understood about how to measure distance on a digital terrain surface, or about the uncertainty of such measurements. To fill this gap, this study applies a Monte Carlo simulation to evaluate seven surface-adjustment methods for distance measurement on digital terrain models. Using parallel computing techniques and a memory-optimization method, the processing time for the distance calculations of 6,000 simulated transects was reduced to a manageable level. The accuracy and computational efficiency of the surface-adjustment methods were systematically compared across six study areas with various terrain types and across digital elevation models (DEMs) of different resolutions. The major findings indicate a trade-off between measurement accuracy and computational efficiency: calculations on finer-resolution DEMs improve measurement accuracy but increase processing time. Among the methods compared, the weighted-average method demonstrates the highest accuracy and the second-fastest processing time. Additionally, the choice of surface-adjustment method has a greater impact on the accuracy of distance measurements in rougher terrain.
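The paper compares seven surface-adjustment methods; the minimal sketch below shows only the general idea of a surface-adjusted (3-D) transect length on a gridded DEM. Bilinear interpolation is used here as one plausible reading of a "weighted average" elevation estimate; the function names, cell size, and sample spacing are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: surface-adjusted transect length on a gridded DEM.
# The "weighted average" elevation here is a bilinear interpolation of the
# four surrounding cells -- one plausible variant, not the paper's exact method.
import numpy as np

def weighted_average_elevation(dem, x, y, cell_size):
    """Bilinear (area-weighted) elevation at planar position (x, y)."""
    col, row = x / cell_size, y / cell_size
    c0, r0 = int(np.floor(col)), int(np.floor(row))
    c1, r1 = min(c0 + 1, dem.shape[1] - 1), min(r0 + 1, dem.shape[0] - 1)
    fc, fr = col - c0, row - r0
    return (dem[r0, c0] * (1 - fc) * (1 - fr) + dem[r0, c1] * fc * (1 - fr)
            + dem[r1, c0] * (1 - fc) * fr + dem[r1, c1] * fc * fr)

def surface_distance(dem, start, end, cell_size, n_samples=100):
    """Sum 3-D segment lengths along a straight planar transect."""
    xs = np.linspace(start[0], end[0], n_samples)
    ys = np.linspace(start[1], end[1], n_samples)
    zs = np.array([weighted_average_elevation(dem, x, y, cell_size)
                   for x, y in zip(xs, ys)])
    steps = np.sqrt(np.diff(xs) ** 2 + np.diff(ys) ** 2 + np.diff(zs) ** 2)
    return steps.sum()

# Example: a 10 m DEM with a gentle slope; the surface distance exceeds
# the 700 m planar length of the transect.
dem = np.fromfunction(lambda r, c: 2.0 * c, (100, 100))
print(surface_distance(dem, (50.0, 50.0), (750.0, 50.0), cell_size=10.0))
```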
Authors:
Award ID(s):
1853866
Publication Date:
NSF-PAR ID:
10197781
Journal Name:
Geographical Analysis
ISSN:
0016-7363
Sponsoring Org:
National Science Foundation
More Like this
  1. The topic of this paper is the airborne evaluation of ICESat-2 Advanced Topographic Laser Altimeter System (ATLAS) measurement capabilities and surface-height determination over crevassed glacial terrain, with a focus on the geodetical accuracy of geophysical data collected from a helicopter. To obtain surface heights over crevassed and otherwise complex ice surfaces, ICESat-2 data are analyzed using the density-dimension algorithm for ice surfaces (DDA-ice), which yields surface heights at the nominal 0.7 m along-track spacing of ATLAS data. As the result of an ongoing surge, Negribreen, Svalbard, provided an ideal situation for the validation objectives in 2018 and 2019, because many different crevasse types and morphologically complex ice surfaces existed in close proximity. Airborne geophysical data, including laser altimeter data (profilometer data at 905 nm frequency), differential Global Positioning System (GPS) data, Inertial Measurement Unit (IMU) data, on-board time-lapse imagery, and photographs, were collected during two campaigns in the summers of 2018 and 2019. The airborne experiment setup, geodetical correction, and data-processing steps are described here. To date, there is relatively little knowledge of the geodetical accuracy that can be obtained from kinematic data collection from a helicopter. Our study finds that (1) kinematic GPS data collection with correction in post-processing yields higher accuracies than Real-Time-Kinematic (RTK) data collection. (2) Processing of only the rover data using the Natural Resources Canada Spatial Reference System Precise Point Positioning (CSRS-PPP) software is sufficiently accurate for the sub-satellite validation purpose. (3) Distances between ICESat-2 ground tracks and airborne ground tracks were generally better than 25 m, while the distance between predicted and actual ICESat-2 ground tracks was on the order of 9 m, which allows direct comparison of ice-surface heights and spatial statistical characteristics of crevasses from the satellite and airborne measurements. (4) The Lasertech Universal Laser System (ULS), operated at up to 300 m above ground level, yields full return frequency (400 Hz) and 0.06–0.08 m on-ice along-track spacing of height measurements. (5) Cross-over differences of airborne laser altimeter data are −0.172 ± 2.564 m along straight paths, which implies a precision of approximately 2.6 m for ICESat-2 validation experiments in crevassed terrain. (6) In summary, the comparatively lightweight experiment setup of a suite of small survey equipment mounted on a Eurocopter (Helicopter AS-350) and kinematic GPS data analyzed in post-processing using CSRS-PPP leads to high-accuracy repeats of the ICESat-2 tracks. The technical results (1)–(6) indicate that direct comparison of ice-surface heights and crevasse depths from the ICESat-2 and airborne laser altimeter data is warranted. Numerical evaluation of the height comparisons utilizes spatial surface-roughness measures. The final result of the validation is that ICESat-2 ATLAS data, analyzed with the DDA-ice, facilitate surface-height determination over crevassed terrain, in good agreement with airborne data, including spatial characteristics such as surface roughness, crevasse spacing, and depth, which are key informants on the deformation and dynamics of a glacier during surge.
  2. Parallel-laser photogrammetry is growing in popularity as a way to collect non-invasive body size data from wild mammals. Despite its many appeals, this method requires researchers to hand-measure (i) the pixel distance between the parallel laser spots (inter-laser distance) to produce a scale within the image, and (ii) the pixel distance between the study subject’s body landmarks (inter-landmark distance). This manual effort is time-consuming and introduces human error: a researcher measuring the same image twice will rarely return the same values both times (resulting in within-observer error), as is also the case when two researchers measure the same image (resulting in between-observer error). Here, we present two independent methods that automate the inter-laser distance measurement of parallel-laser photogrammetry images. One method uses machine learning and image processing techniques in Python, and the other uses image processing techniques in ImageJ. Both of these methods reduce labor and increase precision without sacrificing accuracy. We first introduce the workflow of the two methods. Then, using two parallel-laser datasets of wild mountain gorilla and wild savannah baboon images, we validate the precision of these two automated methods relative to manual measurements and to each other. We also estimate the reduction of variation in final body-size estimates in centimeters when adopting these automated methods, as these methods have no human error. Finally, we highlight the strengths of each method, suggest best practices for adopting either of them, and propose future directions for the automation of parallel-laser photogrammetry data. (A minimal sketch of the inter-laser scaling idea appears after this list.)
  3. Abstract

    Decision trees are a widely used method for classification, both alone and as the building blocks of multiple different ensemble learning methods. The Max Cut decision tree introduced here involves novel modifications to a standard, baseline variant of a classification decision tree, CART Gini. One modification involves an alternative splitting metric, Max Cut, based on maximizing the distance between all pairs of observations that belong to separate classes and separate sides of the threshold value. The other modification, Node Means PCA, selects the decision feature from a linear combination of the input features constructed using an adjustment to principal component analysis (PCA) applied locally at each node. Our experiments show that this node-based, localized PCA with the Max Cut splitting metric can dramatically improve classification accuracy while also significantly decreasing computational time compared to the CART Gini decision tree. These improvements are most significant for higher-dimensional datasets. For the example dataset CIFAR-100, the modifications enabled a 49% improvement in accuracy, relative to CART Gini, while providing a 6.8× speed-up compared to the Scikit-Learn implementation of CART Gini. These introduced modifications are expected to dramatically advance the capabilities of decision trees for difficult classification tasks. (A naive sketch of the Max Cut splitting score appears after this list.)

  4. Recent algorithmic developments have brought competitive classification accuracy for neural networks despite constraining the network parameters to ternary or binary representations. These findings show significant optimization opportunities to replace computationally intensive convolution operations (based on multiplication) with more efficient and less complex operations such as addition. In the hardware implementation domain, processing-in-memory architecture is becoming a promising solution to alleviate the enormous, energy-hungry data communication between memory and processing units, bringing considerable improvements in system performance and energy efficiency while running such large networks. In this paper, we review several of our recent works on Processing-in-Memory (PIM) accelerators based on Magnetic Random Access Memory computational sub-arrays that accelerate the inference mode of quantized neural networks using digital non-volatile memory rather than analog crossbar operation. In this way, we investigate the performance of two distinct in-memory addition schemes compared to other digital methods based on processing-in-DRAM/GPU/ASIC designs to tackle the DNN power and memory-wall bottlenecks. (A toy illustration of the multiplication-free arithmetic appears after this list.)
  5. Urban flooding is a major natural disaster that poses a serious threat to the urban environment. Flood extent needs to be mapped in near real time for disaster rescue and relief missions, reconstruction efforts, and financial loss evaluation. Many efforts have been made to identify flooding zones with remote sensing data and image processing techniques. Unfortunately, the near real-time production of accurate flood maps over impacted urban areas has not been well investigated, due to three major issues. (1) Satellite imagery with high spatial resolution over urban areas usually has a nonhomogeneous background due to different types of objects such as buildings, moving vehicles, and road networks. As such, classical machine learning approaches can hardly model the spatial relationship between sample pixels in the flooding area. (2) Handcrafted features associated with the data are usually required as input for conventional flood mapping models, which may not be able to fully utilize the underlying patterns of a large number of available data. (3) High-resolution optical imagery often has varied pixel digital numbers (DNs) for the same ground objects as a result of highly inconsistent illumination conditions during a flood. Accordingly, traditional methods of flood mapping have major limitations in generalization based on testing data. To address the aforementioned issues in urban flood mapping, we developed a patch similarity convolutional neural network (PSNet) using satellite multispectral surface reflectance imagery before and after flooding with a spatial resolution of 3 meters. We used spectral reflectance instead of raw pixel DNs so that the influence of inconsistent illumination caused by varied weather conditions at the time of data collection can be greatly reduced. Such consistent spectral reflectance data also enhance the generalization capability of the proposed model. Experiments on the high-resolution imagery before and after the urban flooding events (i.e., the 2017 Hurricane Harvey and the 2018 Hurricane Florence) showed that the developed PSNet can produce urban flood maps with consistently high precision, recall, F1 score, and overall accuracy compared with baseline classification models including support vector machine, decision tree, random forest, and AdaBoost, which were often poor in either precision or recall. The study paves the way to fuse bi-temporal remote sensing images for near real-time precision damage mapping associated with other types of natural hazards (e.g., wildfires and earthquakes). (A minimal sketch of the bi-temporal patch input design appears after this list.)
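For item 2 (parallel-laser photogrammetry), the sketch below illustrates the core geometry being automated: locate the two laser spots, take the pixel distance between their centroids, and use the known physical laser spacing to convert an inter-landmark pixel distance into centimeters. It is not the paper's Python or ImageJ pipeline; the thresholding approach and the 4 cm laser spacing are illustrative assumptions.

```python
# Hypothetical sketch of automated inter-laser distance measurement and
# scaling; a simple brightness threshold stands in for the paper's methods.
import numpy as np
from scipy import ndimage

def inter_laser_distance_px(image, threshold=0.9):
    """Pixel distance between the centroids of the two largest bright blobs."""
    mask = image > threshold * image.max()
    labels, n = ndimage.label(mask)
    if n < 2:
        raise ValueError("Fewer than two laser spots detected")
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1          # label ids of the two largest blobs
    (r1, c1), (r2, c2) = ndimage.center_of_mass(mask, labels, index=keep)
    return np.hypot(r1 - r2, c1 - c2)

def landmark_distance_cm(image, landmark_a, landmark_b, laser_spacing_cm=4.0):
    """Convert an inter-landmark pixel distance to centimeters."""
    px_per_cm = inter_laser_distance_px(image) / laser_spacing_cm
    d_px = np.hypot(landmark_a[0] - landmark_b[0], landmark_a[1] - landmark_b[1])
    return d_px / px_per_cm
```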
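For item 3 (Max Cut decision trees), this is a naive reading of the splitting score described in the abstract: for a candidate threshold on one feature, sum the pairwise distances between observations that fall on opposite sides of the threshold and belong to different classes, then keep the threshold with the largest sum. The function names are hypothetical, and the paper's exact formulation and efficient implementation may differ.

```python
# Naive O(n^2)-per-threshold sketch of a Max Cut style splitting score.
import numpy as np

def max_cut_score(feature, X, y, threshold):
    left = feature <= threshold
    right = ~left
    score = 0.0
    for i in np.where(left)[0]:
        for j in np.where(right)[0]:
            if y[i] != y[j]:                       # cross-class, cross-side pair
                score += np.linalg.norm(X[i] - X[j])
    return score

def best_max_cut_split(feature, X, y):
    """Return (best_threshold, best_score) over midpoints of sorted values."""
    values = np.unique(feature)
    candidates = (values[:-1] + values[1:]) / 2.0
    scores = [max_cut_score(feature, X, y, t) for t in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Tiny example: two overlapping 2-D classes, split on the first feature.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(best_max_cut_split(X[:, 0], X, y))
```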
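For item 4 (PIM acceleration of quantized networks), the toy code below only illustrates the arithmetic opportunity the abstract mentions: with ternary weights in {-1, 0, +1}, a dot product reduces to signed additions, the kind of operation an in-memory sub-array can perform in place. It says nothing about the MRAM hardware or the paper's addition schemes.

```python
# Toy illustration: ternary weights turn a dot product into additions only.
import numpy as np

def ternary_dot(activations, ternary_weights):
    """Dot product using only additions/subtractions (weights in {-1, 0, +1})."""
    acc = 0.0
    for a, w in zip(activations, ternary_weights):
        if w == 1:
            acc += a
        elif w == -1:
            acc -= a
        # w == 0 contributes nothing
    return acc

a = np.array([0.5, -1.25, 2.0, 0.75])
w = np.array([1, 0, -1, 1])
assert np.isclose(ternary_dot(a, w), np.dot(a, w))
```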
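For item 5 (PSNet flood mapping), the sketch below shows only the bi-temporal input design described in the abstract: pre- and post-flood multispectral reflectance patches stacked along the channel axis and classified as flooded or not. It is not the authors' PSNet architecture; the layer sizes, band count, and patch size are placeholder assumptions.

```python
# Minimal, hypothetical bi-temporal patch classifier (not the published PSNet).
import torch
import torch.nn as nn

class BiTemporalPatchNet(nn.Module):
    def __init__(self, bands=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2 * bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # flooded / not flooded

    def forward(self, pre_patch, post_patch):
        x = torch.cat([pre_patch, post_patch], dim=1)  # stack reflectance bands
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example: a batch of 8 patches, 4 spectral bands, 9x9 pixels at 3 m resolution.
model = BiTemporalPatchNet(bands=4)
pre, post = torch.rand(8, 4, 9, 9), torch.rand(8, 4, 9, 9)
print(model(pre, post).shape)  # torch.Size([8, 2])
```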