Search for: All records, Creators/Authors contains: "Laefer, D F"

  1. Abstract. Most deep learning (DL) methods that are not end-to-end rely on several multi-scale, multi-type hand-crafted features that make the network more complex, more computationally intensive, and more vulnerable to overfitting. Furthermore, reliance on empirically based feature dimensionality reduction may lead to misclassification. In contrast, efficient feature management can reduce storage and computational complexity, build better classifiers, and improve overall performance. Principal Component Analysis (PCA) is a well-known dimensionality reduction technique that has been used for feature extraction. This paper presents a two-step, PCA-based feature extraction algorithm that employs a variant of feature-based PointNet (Qi et al., 2017a) for point cloud classification. The paper extends the PointNet framework for use on large-scale aerial LiDAR data and contributes by (i) developing a new feature extraction algorithm, (ii) exploring the impact of dimensionality reduction in feature extraction, and (iii) introducing a non-end-to-end PointNet variant for per-point classification of point clouds. This is demonstrated on aerial laser scanning (ALS) point clouds. The algorithm successfully reduces the dimension of the feature space without sacrificing performance, as benchmarked against the original PointNet algorithm. When tested on the well-known Vaihingen data set, the proposed algorithm achieves an Overall Accuracy (OA) of 74.64% using 9 input vectors and 14 shape features, whereas with the same 9 input vectors and only 5 PCs (principal components built from the 14 shape features) it achieves a higher OA of 75.36%, demonstrating the effect of efficient dimensionality reduction.

     
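A minimal sketch of the two-step idea described above, assuming a scikit-learn pipeline on synthetic data: 14 hand-crafted shape features per point are standardized and compressed to 5 principal components before being handed to a classifier. The array shapes, random data, and downstream classifier are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: PCA compression of hand-crafted per-point shape features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_points = 10_000                                   # points in an ALS tile (placeholder)
shape_features = rng.normal(size=(n_points, 14))    # 14 hand-crafted shape features

# Standardize, then keep the 5 leading principal components.
scaled = StandardScaler().fit_transform(shape_features)
pca = PCA(n_components=5)
pcs = pca.fit_transform(scaled)                     # (n_points, 5) compact descriptors

print("explained variance kept:", pca.explained_variance_ratio_.sum())
# `pcs` would then be concatenated with the 9 raw input vectors and fed to
# a PointNet-style per-point classifier (not shown here).
```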
  2. Abstract. This study investigates the inability of two popular data-splitting techniques (train/test split and k-fold cross-validation), used to create training and validation data sets, to achieve sufficient generality for supervised deep learning (DL) methods. This failure stems mainly from their limited ability to create new data. In response, the bootstrap, a computer-based statistical resampling method, has been used efficiently to estimate the distribution of a sample estimator and to assess a model without knowledge of the population. This paper couples cross-validation and the bootstrap to combine their respective advantages for data generation and to achieve better generalization of a DL model. This paper contributes by: (i) developing an algorithm for better selection of training and validation data sets, (ii) exploring the potential of the bootstrap for drawing statistical inferences on the necessary performance metrics (e.g., mean square error), and (iii) introducing a method that can assess and improve the efficiency of a DL model. The proposed method is applied to semantic segmentation and is demonstrated via a DL-based classification algorithm, PointNet, on aerial laser scanning point cloud data.

     
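A hedged sketch of the coupling described above, with a toy least-squares model standing in for PointNet: each cross-validation fold trains on a bootstrap resample of its training indices, and the fold-level MSEs are themselves bootstrapped to estimate a confidence interval for the metric. The data, model, and fold counts are placeholders.

```python
# Hedged sketch: coupling k-fold cross-validation with the bootstrap.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=500)

fold_mse = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Bootstrap the training fold: sample with replacement to create a
    # "new" training set of the same size.
    boot = rng.choice(train_idx, size=train_idx.size, replace=True)
    w, *_ = np.linalg.lstsq(X[boot], y[boot], rcond=None)   # toy stand-in model
    fold_mse.append(np.mean((X[val_idx] @ w - y[val_idx]) ** 2))

# Bootstrap the fold-level MSEs to estimate the metric's distribution.
boot_means = [np.mean(rng.choice(fold_mse, size=len(fold_mse), replace=True))
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"MSE mean={np.mean(fold_mse):.3f}, 95% bootstrap CI=({lo:.3f}, {hi:.3f})")
```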
  3. Abstract. While data on human behavior in COVID-19-rich environments have been captured and publicly released, the spatial components of such data are recorded in two dimensions. Thus, the complete roles of the built and natural environment cannot be readily ascertained. This paper introduces a mechanism for the three-dimensional (3D) visualization of the egress behaviors of individuals leaving a COVID-19-exposed healthcare facility in Spring 2020 in New York City. Behavioral data were extracted and projected onto a 3D aerial laser scanning point cloud of the surrounding area rendered with Potree, a readily available open-source Web Graphics Library (WebGL) point cloud viewer. The outcomes were 3D heatmap visualizations of the built environment that indicated the event locations of individuals exhibiting specific characteristics (e.g., men vs. women; public transit users vs. private vehicle users). These visualizations enabled interactive navigation through the space, accessible through any modern web browser supporting WebGL. Visualizing egress behavior in this manner may highlight patterns indicative of correlations between the environment, human behavior, and transmissible diseases. Findings from such tools have the potential to identify high-exposure areas and surfaces such as doors, railings, and other physical features. Providing flexible visualization capabilities with 3D spatial context can enable analysts to quickly advise and communicate vital information across a broad range of use cases. This paper presents such an application to extract the public health information necessary to form localized responses to reduce COVID-19 infection and transmission rates in urban areas.
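The abstract does not detail the projection step, but one plausible mechanism is sketched below: 2D event locations are lifted onto the 3D point cloud by borrowing the elevation of the nearest cloud point in plan view, and per-point event counts form the heatmap attribute. The nearest-neighbor lift, synthetic data, and export path are assumptions, not the paper's documented method.

```python
# Hedged sketch: lifting 2D behavioral events onto a 3D point cloud.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 100, size=(50_000, 3))     # ALS points (x, y, z), placeholder
events_2d = rng.uniform(0, 100, size=(300, 2))    # egress events (x, y), placeholder

# Match each 2D event to the nearest cloud point in plan view (x, y only).
tree = cKDTree(cloud[:, :2])
_, nearest = tree.query(events_2d)
events_3d = cloud[nearest]                        # events now carry an elevation

# Per-point heat: count events landing at each matched cloud point.
heat = np.zeros(len(cloud))
np.add.at(heat, nearest, 1.0)

# `cloud` plus the `heat` scalar could then be written out (e.g., as LAS with
# an extra attribute) and rendered as a colorized heatmap in Potree.
print(events_3d[:3], heat.max())
```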
  4. Abstract. Each year, lives are needlessly lost to floods because residents fail to heed evacuation advisories. Risk communication research suggests that flood warnings need to be more vivid, contextualized, and visualizable in order to engage the message recipient. This paper makes the case for the development of a low-cost augmented reality tool that enables individuals to visualize, at close range and in three dimensions, their homes, schools, and places of work and worship subjected to flooding (modeled upon a series of federally expected flood hazard levels). The paper also introduces initial tool development in this area and the related data input stream.
  5. Abstract. The massive amounts of spatio-temporal information often present in LiDAR data sets make their storage, processing, and visualisation computationally demanding. There is an increasing need for systems and tools that support all the spatial and temporal components and the three-dimensional nature of these datasets for effortless retrieval and visualisation. In response to these needs, this paper presents a scalable, distributed database system designed explicitly for retrieving and viewing large LiDAR datasets on the web. The ultimate goal of the system is to provide rapid and convenient access to a large repository of LiDAR data hosted on a distributed computing platform. The system is composed of multiple shared-nothing nodes operating in parallel: each node is autonomous, with a dedicated set of processors and memory, and the nodes communicate with one another via an interconnected network. The data management system presented in this paper is implemented on Apache HBase, a distributed key-value datastore within the Hadoop ecosystem. HBase is extended with new data encoding and indexing mechanisms to accommodate both the point cloud and the full-waveform components of LiDAR data. The data can be consumed by any desktop or web application that communicates with the data repository using the HTTP protocol; the communication is enabled by a web servlet. In addition to the command-line tool used for administration tasks, two web applications are presented to illustrate the types of user-facing applications that can be coupled with the data system.
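The paper's specific encoding and indexing mechanisms are not described in the abstract; the sketch below shows one common key-design pattern for point data in a key-value store such as HBase, a Morton (Z-order) row key that interleaves the bits of quantized x/y/z so spatially nearby points sort near one another and fall into the same region for efficient range scans. This is an assumption for illustration, not the authors' scheme.

```python
# Hedged sketch: a Morton (Z-order) row key for point data in a key-value store.
import struct

def part1by2(n: int) -> int:
    """Spread the low 21 bits of n so each bit is followed by two zero bits."""
    n &= (1 << 21) - 1
    n = (n | (n << 32)) & 0x1F00000000FFFF
    n = (n | (n << 16)) & 0x1F0000FF0000FF
    n = (n | (n << 8))  & 0x100F00F00F00F00F
    n = (n | (n << 4))  & 0x10C30C30C30C30C3
    n = (n | (n << 2))  & 0x1249249249249249
    return n

def morton_key(x: float, y: float, z: float, scale: float = 100.0) -> bytes:
    """Quantize non-negative coordinates (1 cm at scale=100) and interleave bits."""
    qx, qy, qz = (int(v * scale) for v in (x, y, z))   # assumes offset-shifted coords
    code = part1by2(qx) | (part1by2(qy) << 1) | (part1by2(qz) << 2)
    return struct.pack(">Q", code)                     # big-endian bytes sort correctly

# Row keys for two nearby points share a long common prefix, which keeps them
# close together in a lexicographically sorted store such as HBase.
print(morton_key(10.00, 20.00, 5.00).hex())
print(morton_key(10.01, 20.00, 5.00).hex())
```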