

Title: UAV-based Geotechnical Modeling and Mapping of an Inaccessible Underground Site
Photogrammetry is becoming a more common method for mapping geological and structural features in underground mines. Even with photogrammetric methods, however, capturing geological and structural data in inaccessible areas of mines, such as unsupported excavations, remains a problem, leaving geological models of mines with incomplete datasets. The implementation of Unmanned Aerial Vehicles (UAVs) underground has allowed for experimentation with photogrammetry conducted from a UAV platform. This paper presents the results of an investigation focused on collecting UAV-based imagery at underground locations within Barrick Gold Corporation’s Golden Sunlight Mine in Whitehall, Montana, and on using that imagery to produce 3D models for mapping geologic features. The primary components of the study are the underground imagery acquisition experiences and a comparison of underground photogrammetric modeling from UAV imagery using two sets of software: a) ADAM Technology’s 3DM CalibCam and 3DM Analyst and b) Bentley’s ContextCapture for 3D modeling combined with Split Engineering’s Split-FX for mapping. The lessons learned during this study may help guide future efforts to use UAVs for capturing geologic data and for monitoring stability in inaccessible areas.
Award ID(s):
1742880
NSF-PAR ID:
10066210
Author(s) / Creator(s):
Date Published:
Journal Name:
UAV-based Geotechnical Modeling and Mapping of an Inaccessible Underground Site
Page Range / eLocation ID:
Paper 18-516
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Geotechnical characterization of rock masses in underground mines often involves physical measurements in supported excavations. However, unsupported stopes and drifts prevent safe access for mapping by geotechnical personnel. The advent of inexpensive, open platform unmanned aerial vehicles (UAVs) allows geotechnical personnel to characterize hazardous rock masses by utilizing traditional photogrammetric and FLIR (forward looking infrared) imagery techniques. The photogrammetric imagery can be used to capture geological structural data from the rock mass for kinematic and numerical analyses, as well as for generating geological models. In particular, the FLIR imagery has the potential to assist in identifying areas of loose rock, which typically goes unnoticed until it becomes a hazard. This paper summarizes the results of a study involving UAV flights underground at the Barrick Golden Sunlight Mine, the generation of 3D models from UAV-captured imagery, and the identification of geological data from photogrammetry models. Results confirm that the combination of off-the-shelf technologies used in this study can be successfully employed as a geotechnical tool in the underground mining environment. 
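The structural data extracted from photogrammetric models are typically plane orientations. As a minimal sketch (not from the paper), a fitted plane's unit normal in east-north-up coordinates can be converted to the dip and dip direction used in kinematic analysis:

```python
import math

def normal_to_dip(nx, ny, nz):
    """Convert a unit plane normal (east, north, up) to (dip, dip direction) in degrees."""
    if nz < 0:  # force the normal to point upward
        nx, ny, nz = -nx, -ny, -nz
    dip = math.degrees(math.acos(min(nz, 1.0)))
    if dip < 1e-9:
        return 0.0, 0.0  # horizontal plane: dip direction is undefined, report 0
    # The horizontal component of the normal points toward the dip direction;
    # azimuth is measured clockwise from north.
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip, dip_dir

# A plane dipping 45 degrees toward the east (azimuth 090)
s = math.sqrt(0.5)
dip, dip_dir = normal_to_dip(s, 0.0, s)  # dip ~ 45, dip direction ~ 90
```

Normals like these would come from planes fitted to joint faces in the photogrammetric point cloud; the function itself is generic and independent of any particular software package.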
  2. Flooding is one of the leading threats of natural disasters to human life and property, especially in densely populated urban areas. Rapid and precise extraction of the flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetry data acquisition platform that can quickly deliver high-resolution imagery because of its cost-effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including SVMs (Support Vector Machines), have been used for flood extent mapping. In recent years, there has been a significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate the performance of the model on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets that contained only one hundred training samples and still yield a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas from UAV images more precisely than traditional classifiers such as SVMs.
The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20% and 89%, respectively. 
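Per-class accuracies like those above are read off a confusion matrix. As a quick illustration (with made-up pixel counts, not the paper's data), the overall, producer's, and user's accuracies for a "water" class follow directly from the matrix:

```python
import numpy as np

# Hypothetical 2x2 confusion matrix (rows: true class, cols: predicted class)
# class order: [water, non-water]; counts are illustrative only
cm = np.array([[950,  50],
               [ 30, 970]])

overall_acc = np.trace(cm) / cm.sum()        # fraction of all pixels classified correctly
water_recall = cm[0, 0] / cm[0].sum()        # producer's accuracy for "water"
water_precision = cm[0, 0] / cm[:, 0].sum()  # user's accuracy for "water"
```

With these counts the overall accuracy is 0.96 and the producer's accuracy for water is 0.95; the same arithmetic applies to any number of classes.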
  3. Current forest monitoring technologies including satellite remote sensing, manned/piloted aircraft, and observation towers leave uncertainties about a wildfire’s extent, behavior, and conditions in the fire’s near environment, particularly during its early growth. Rapid mapping and real-time fire monitoring can inform in-time intervention or management solutions to maximize beneficial fire outcomes. Drone systems’ unique features of 3D mobility, low flight altitude, and fast and easy deployment make them a valuable tool for early detection and assessment of wildland fires, especially in remote forests that are not easily accessible by ground vehicles. In addition, the lack of abundant, well-annotated aerial datasets – in part due to unmanned aerial vehicles’ (UAVs’) flight restrictions during prescribed burns and wildfires – has limited research advances in reliable data-driven fire detection and modeling techniques. While existing wildland fire datasets often include either color or thermal fire images, here we present (1) a multi-modal UAV-collected dataset of dual-feed side-by-side videos including both RGB and thermal images of a prescribed fire in an open canopy pine forest in Northern Arizona and (2) a deep learning-based methodology for detecting fire and smoke pixels with accuracy much higher than is achievable from single-channel video feeds. The collected images are labeled as “fire” or “no-fire” frames by two human experts, who used the side-by-side RGB and thermal images to determine each label. To provide context to the main dataset’s aerial imagery, an included supplementary dataset provides a georeferenced pre-burn point cloud, an RGB orthomosaic, weather information, a burn plan, and other burn information. By using and expanding on this guide dataset, researchers can develop new data-driven fire detection, fire segmentation, and fire modeling techniques.
  4. Unmanned aerial vehicles (UAVs) equipped with multispectral sensors offer high spatial and temporal resolution imagery for monitoring crop stress at early stages of development. Analysis of UAV-derived data with advanced machine learning models could improve real-time management in agricultural systems, but guidance for this integration is currently limited. Here we compare two deep learning-based strategies for early warning detection of crop stress, using multitemporal imagery throughout the growing season to predict field-scale yield in irrigated rice in eastern Arkansas. Both deep learning strategies showed improvements upon traditional statistical learning approaches including linear regression and gradient boosted decision trees. First, we explicitly accounted for variation across developmental stages using a 3D convolutional neural network (CNN) architecture that captures both spatial and temporal dimensions of UAV images from multiple time points throughout one growing season. 3D-CNNs achieved low prediction error on the test set, with a Root Mean Squared Error (RMSE) of 8.8% of the mean yield. For the second strategy, a 2D-CNN, we considered only spatial relationships among pixels for image features acquired during a single flyover. 2D-CNNs trained on images from a single day were most accurate when images were taken during booting stage or later, with RMSE ranging from 7.4 to 8.2% of the mean yield. A primary benefit of convolutional autoencoder-like models (based on analyses of prediction maps and feature importance) is the spatial denoising effect that corrects yield predictions for individual pixels based on the values of vegetation index and thermal features for nearby pixels. Our results highlight the promise of convolutional autoencoders for UAV-based yield prediction in rice. 
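The errors above are RMSEs expressed as a percent of the mean yield. As a minimal sketch (with made-up yield values, not the paper's data), this relative RMSE is computed as:

```python
import numpy as np

# Hypothetical observed vs. predicted field-scale yields (same units)
y_true = np.array([8.0, 9.5, 7.2, 10.1, 8.8])
y_pred = np.array([8.4, 9.0, 7.8, 9.6, 9.1])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))  # root mean squared error
rel_rmse = 100.0 * rmse / y_true.mean()          # RMSE as a percent of mean yield
```

Normalizing by the mean makes models comparable across fields and seasons with different absolute yield levels, which is why results such as "8.8% of the mean yield" are reported this way.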
  5. Arctic vegetation communities are rapidly changing with climate warming, which impacts wildlife, carbon cycling and climate feedbacks. Accurately monitoring vegetation change is thus crucial, but scale mismatches between field and satellite-based monitoring cause challenges. Remote sensing from unmanned aerial vehicles (UAVs) has emerged as a bridge between field data and satellite-based mapping. We assess the viability of using high resolution UAV imagery and UAV-derived Structure from Motion (SfM) to predict cover, height and aboveground biomass (henceforth biomass) of Arctic plant functional types (PFTs) across a range of vegetation community types. We classified imagery by PFT, estimated cover and height, and modeled biomass from UAV-derived volume estimates. Predicted values were compared to field estimates to assess results. Cover was estimated with root-mean-square error (RMSE) 6.29-14.2% and height was estimated with RMSE 3.29-10.5 cm, depending on the PFT. Total aboveground biomass was predicted with RMSE 220.5 g m⁻², and per-PFT RMSE ranged from 17.14-164.3 g m⁻². Deciduous and evergreen shrub biomass was predicted most accurately, followed by lichen, graminoid, and forb biomass. Our results demonstrate the effectiveness of using UAVs to map PFT biomass, which provides a link towards improved mapping of PFTs across large areas using earth observation satellite imagery.