
Title: How to Measure Distance on a Digital Terrain Surface and Why It Matters in Geographical Analysis
Distance is the most fundamental metric in spatial analysis and modeling. Planar distance and geodesic distance are the common distance measurements in current geographic information systems and geospatial analytic tools. However, little is understood about how to measure distance on a digital terrain surface, or about the uncertainty of such measurements. To fill this gap, this study applies a Monte Carlo simulation to evaluate seven surface-adjustment methods for distance measurement on digital terrain models. Using parallel computing techniques and a memory-optimization method, the processing time for the distance calculations of 6,000 simulated transects was reduced to a manageable level. The accuracy and computational efficiency of the surface-adjustment methods were systematically compared across six study areas with various terrain types and across digital elevation models (DEMs) at different resolutions. The major findings indicate a trade-off between measurement accuracy and computational efficiency: calculations on finer-resolution DEMs improve measurement accuracy but increase processing time. Among the methods compared, the weighted average demonstrates the highest accuracy and the second-fastest processing time. Additionally, the choice of surface-adjustment method has a greater impact on the accuracy of distance measurements in rougher terrain.
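The surface-adjustment idea can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it densifies a transect over a DEM, estimates elevation at each sample point as an inverse-distance-weighted average of the four surrounding cell centers (one plausible reading of the "weighted average" method), and sums the 3D segment lengths. Function names, the sampling density, and the weighting scheme are all assumptions.

```python
import numpy as np

def weighted_avg_elevation(dem, cellsize, x, y):
    """Elevation at (x, y): inverse-distance-weighted average of the four
    surrounding DEM cell centers (illustrative surface-adjustment rule)."""
    col, row = x / cellsize, y / cellsize
    c0, r0 = int(np.floor(col)), int(np.floor(row))
    vals, wts = [], []
    for r in (r0, r0 + 1):
        for c in (c0, c0 + 1):
            d = np.hypot(col - c, row - r)
            if d == 0:                      # sample falls on a cell center
                return dem[r, c]
            vals.append(dem[r, c])
            wts.append(1.0 / d)
    return np.average(vals, weights=wts)

def surface_distance(dem, cellsize, p0, p1, n_samples=100):
    """Surface-adjusted transect length: sample elevations along the line
    from p0 to p1 and sum the 3D segment lengths."""
    xs = np.linspace(p0[0], p1[0], n_samples)
    ys = np.linspace(p0[1], p1[1], n_samples)
    zs = np.array([weighted_avg_elevation(dem, cellsize, x, y)
                   for x, y in zip(xs, ys)])
    dx, dy, dz = np.diff(xs), np.diff(ys), np.diff(zs)
    return float(np.sum(np.sqrt(dx**2 + dy**2 + dz**2)))
```

On a flat DEM this reduces exactly to planar distance; on any rougher surface it can only be longer, which is why the choice of adjustment method matters more in rugged terrain.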
Journal Name:
Geographical Analysis
Sponsoring Org:
National Science Foundation
More Like this
  1. The topic of this paper is the airborne evaluation of ICESat-2 Advanced Topographic Laser Altimeter System (ATLAS) measurement capabilities and surface-height determination over crevassed glacial terrain, with a focus on the geodetic accuracy of geophysical data collected from a helicopter. To obtain surface heights over crevassed and otherwise complex ice surfaces, ICESat-2 data are analyzed using the density-dimension algorithm for ice surfaces (DDA-ice), which yields surface heights at the nominal 0.7 m along-track spacing of ATLAS data. As the result of an ongoing surge, Negribreen, Svalbard, provided an ideal setting for the validation objectives in 2018 and 2019, because many different crevasse types and morphologically complex ice surfaces existed in close proximity. Airborne geophysical data, including laser altimeter data (profilometer data at 905 nm wavelength), differential Global Positioning System (GPS) data, Inertial Measurement Unit (IMU) data, on-board time-lapse imagery, and photographs, were collected during two campaigns in the summers of 2018 and 2019. The airborne experiment setup, geodetic corrections, and data-processing steps are described here. To date, relatively little is known about the geodetic accuracy that can be obtained from kinematic data collection from a helicopter. Our study finds that (1) kinematic GPS data collection with correction in post-processing yields higher accuracies than Real-Time Kinematic (RTK) data collection. (2) Processing of only the rover data using the Natural Resources Canada Spatial Reference System Precise Point Positioning (CSRS-PPP) software is sufficiently accurate for the sub-satellite validation purpose.
(3) Distances between ICESat-2 ground tracks and airborne ground tracks were generally better than 25 m, while the distance between predicted and actual ICESat-2 ground tracks was on the order of 9 m, which allows direct comparison of ice-surface heights and spatial statistical characteristics of crevasses from the satellite and airborne measurements. (4) The Lasertech Universal Laser System (ULS), operated at up to 300 m above ground level, yields full return frequency (400 Hz) and 0.06–0.08 m on-ice along-track spacing of height measurements. (5) Cross-over differences of airborne laser altimeter data are −0.172 ± 2.564 m along straight paths, which implies a precision of approximately 2.6 m for ICESat-2 validation experiments in crevassed terrain. (6) In summary, the comparatively lightweight experiment setup of a suite of small survey equipment mounted on a Eurocopter (AS-350 helicopter), with kinematic GPS data analyzed in post-processing using CSRS-PPP, leads to high-accuracy repeats of the ICESat-2 tracks. The technical results (1)–(6) indicate that direct comparison of ice-surface heights and crevasse depths from the ICESat-2 and airborne laser altimeter data is warranted. Numerical evaluation of the height comparisons utilizes spatial surface-roughness measures. The final result of the validation is that ICESat-2 ATLAS data, analyzed with the DDA-ice, facilitate surface-height determination over crevassed terrain, in good agreement with airborne data, including spatial characteristics such as surface roughness, crevasse spacing, and depth, which are key informants on the deformation and dynamics of a glacier during surge.
  2. Fromme, Paul; Su, Zhongqing (Eds.)
    Stereovision systems can extract full-field three-dimensional (3D) displacements of structures by processing the images collected with two synchronized cameras. To obtain accurate measurements, the cameras must be calibrated to account for lens distortion (i.e., intrinsic parameters) and to compute the cameras’ relative position and orientation (i.e., extrinsic parameters). Traditionally, calibration is performed by taking photos of a calibration object (e.g., a checkerboard) with the two cameras. Because the calibration object must be similar in size to the targeted structure, measurements on large-scale structures are highly impractical. This research proposes a multi-sensor board with three inertial measurement units and a laser distance meter to compute the extrinsic parameters of a stereovision system and streamline the calibration procedure. In this paper, the performance of the proposed sensor-based calibration is compared with the accuracy of the traditional image-based calibration procedure. Laboratory experiments show that cameras calibrated with the multi-sensor board measure displacements with 95% accuracy compared to displacements obtained from cameras calibrated with the traditional procedure. The results of this study indicate that the sensor-based approach can increase the applicability of 3D digital image correlation measurements to large-scale structures while reducing the time and complexity of the calibration.
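The extrinsic-parameter arithmetic behind a sensor-based calibration can be sketched as follows. This is a minimal illustration of the general idea, not the paper's method: two IMU-derived camera orientations in a shared world frame give the relative rotation, and the laser-measured separation scales a measured unit baseline direction into the translation. All function names and the frame conventions are assumptions.

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def relative_extrinsics(R1, R2, baseline_dir, baseline_m):
    """Illustrative sensor-based extrinsics: R1, R2 are each camera's
    orientation in a common (IMU-derived) world frame; baseline_m is the
    laser-measured camera separation, applied along a unit direction
    expressed in camera 1's frame."""
    R_rel = R1.T @ R2                           # camera 2 orientation in camera 1 frame
    b = np.asarray(baseline_dir, float)
    t = baseline_m * b / np.linalg.norm(b)      # translation camera 1 -> camera 2
    return R_rel, t
```

With `R_rel` and `t` in hand, the usual stereo triangulation machinery applies without ever photographing a large calibration object.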
  3. Parallel-laser photogrammetry is growing in popularity as a way to collect non-invasive body size data from wild mammals. Despite its many appeals, this method requires researchers to hand-measure (i) the pixel distance between the parallel laser spots (inter-laser distance) to produce a scale within the image, and (ii) the pixel distance between the study subject’s body landmarks (inter-landmark distance). This manual effort is time-consuming and introduces human error: a researcher measuring the same image twice will rarely return the same values both times (resulting in within-observer error), as is also the case when two researchers measure the same image (resulting in between-observer error). Here, we present two independent methods that automate the inter-laser distance measurement of parallel-laser photogrammetry images. One method uses machine learning and image processing techniques in Python, and the other uses image processing techniques in ImageJ. Both of these methods reduce labor and increase precision without sacrificing accuracy. We first introduce the workflow of the two methods. Then, using two parallel-laser datasets of wild mountain gorilla and wild savannah baboon images, we validate the precision of these two automated methods relative to manual measurements and to each other. We also estimate the reduction of variation in final body size estimates in centimeters when adopting these automated methods, as these methods have no human error. Finally, we highlight the strengths of each method, suggest best practices for adopting either of them, and propose future directions for the automation of parallel-laser photogrammetry data. 
  4. Spatial interpolation techniques play an important role in hydrology, as many point observations need to be interpolated to create continuous surfaces. Despite the availability of several tools and methods for interpolating data, not all of them work consistently for hydrologic applications. One such technique, the Laplace equation, which is used in hydrology for creating flownets, has rarely been used for data interpolation. The objective of this study is to examine the efficiency of the Laplace formulation (LF) in interpolating hydrologic data and to compare it with other widely used methods such as inverse distance weighting (IDW), natural neighbor, and ordinary kriging. The performance of LF interpolation is evaluated against these methods using quantitative measures, including root mean squared error (RMSE) and the coefficient of determination (R2) for accuracy, visual assessment for surface quality, and computational cost for operational efficiency and speed. Data on surface elevation, river bathymetry, precipitation, temperature, and soil moisture are used for different areas in the United States. The RMSE and R2 results show that LF is comparable to the other methods in accuracy. LF is easy to use, as it requires fewer input parameters than IDW and kriging. Computationally, LF is faster than the other methods when the datasets are not large. Overall, LF offers a robust alternative to existing methods for interpolating various hydrologic data, though further work is required to improve its computational efficiency.
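The Laplace-formulation idea can be sketched with a simple iterative solver. The snippet below is a minimal illustration, not the study's implementation: observation cells are held fixed while every free cell is repeatedly replaced by the mean of its four neighbors (a Jacobi iteration on the discrete Laplace equation); the boundary handling and stopping rule here are assumptions.

```python
import numpy as np

def laplace_interpolate(shape, known, n_iter=20000, tol=1e-10):
    """Interpolate scattered point values onto a grid by iteratively
    relaxing Laplace's equation; `known` maps (row, col) -> value."""
    grid = np.zeros(shape)
    fixed = np.zeros(shape, dtype=bool)
    for (r, c), v in known.items():
        grid[r, c] = v
        fixed[r, c] = True
    for _ in range(n_iter):
        new = grid.copy()
        # interior cells: mean of the four neighbors (Jacobi sweep)
        new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                  grid[1:-1, :-2] + grid[1:-1, 2:])
        # edges: replicate the nearest interior value (a simple zero-flux choice)
        new[0, :], new[-1, :] = new[1, :], new[-2, :]
        new[:, 0], new[:, -1] = new[:, 1], new[:, -2]
        new[fixed] = grid[fixed]        # keep observations pinned
        if np.max(np.abs(new - grid)) < tol:
            return new
        grid = new
    return grid
```

A useful property of this formulation, inherited from the maximum principle of Laplace's equation, is that interpolated values never overshoot the range of the observations, which IDW shares but which makes the smooth LF surfaces easy to sanity-check.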
  5. Abstract

    Decision trees are a widely used method for classification, both alone and as the building blocks of multiple different ensemble learning methods. The Max Cut decision tree introduced here involves novel modifications to a standard, baseline variant of a classification decision tree, CART Gini. One modification involves an alternative splitting metric, Max Cut, based on maximizing the distance between all pairs of observations that belong to separate classes and separate sides of the threshold value. The other modification, Node Means PCA, selects the decision feature from a linear combination of the input features constructed using an adjustment to principal component analysis (PCA) locally at each node. Our experiments show that this node-based, localized PCA with the Max Cut splitting metric can dramatically improve classification accuracy while also significantly decreasing computational time compared to the CART Gini decision tree. These improvements are most significant for higher-dimensional datasets. For the example dataset CIFAR-100, the modifications enabled a 49% improvement in accuracy, relative to CART Gini, while providing a 6.8× speed-up compared to the Scikit-Learn implementation of CART Gini. These introduced modifications are expected to dramatically advance the capabilities of decision trees for difficult classification tasks.
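The Max Cut splitting metric described above can be sketched directly from its definition. This is a naive O(n²) illustration of a one-feature split, not the paper's implementation (which would need to be far more efficient, and pairs it with the Node Means PCA feature construction); the function names and the use of the midpoint candidate thresholds are assumptions.

```python
import numpy as np

def max_cut_score(x, y, threshold):
    """Sum of pairwise distances between observations that lie on opposite
    sides of the threshold AND belong to different classes -- a direct
    reading of the Max Cut splitting metric for one feature."""
    x, y = np.asarray(x, float), np.asarray(y)
    left = x <= threshold
    score = 0.0
    for i in np.where(left)[0]:
        for j in np.where(~left)[0]:
            if y[i] != y[j]:
                score += abs(x[i] - x[j])
    return score

def best_split(x, y):
    """Choose the midpoint threshold with the highest Max Cut score."""
    x = np.asarray(x, float)
    xs = np.unique(x)
    mids = (xs[:-1] + xs[1:]) / 2.0
    scores = [max_cut_score(x, y, t) for t in mids]
    return float(mids[int(np.argmax(scores))])
```

Unlike the Gini criterion, which only counts class proportions on each side, this metric also rewards thresholds that separate the classes by a wide margin in feature space.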
