Title: Improving the local climate zone classification with building height, imperviousness, and machine learning for urban models
Abstract: The Local Climate Zone (LCZ) classification is already widely used in urban heat island and other climate studies. The current classification method does not incorporate crucial urban auxiliary GIS data on building height and imperviousness that could significantly improve the utility as well as the accuracy of urban-type LCZ classification. This study utilized a hybrid GIS- and remote sensing imagery-based framework to systematically compare and evaluate different machine and deep learning methods. The Convolutional Neural Network (CNN) classifier achieves the highest accuracy, but it requires multi-pixel input, which reduces the output's spatial resolution and creates a tradeoff between accuracy and spatial resolution. The Random Forest (RF) classifier performs best among the single-pixel classifiers. This study also shows that incorporating a building height dataset improves the accuracy of the high- and mid-rise classes in the RF classifiers, whereas an imperviousness dataset improves the low-rise classes. The single-pass forward permutation test reveals that both auxiliary datasets dominate the classification accuracy in the RF classifier, while near-infrared and thermal infrared are the dominant features in the CNN classifier. These findings show that the conventional LCZ classification framework used in the World Urban Database and Access Portal Tools (WUDAPT) can be improved by adopting building height and imperviousness information. This framework can be easily applied to different cities to generate LCZ maps for urban models.
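The single-pass forward permutation test mentioned in the abstract can be illustrated with a minimal sketch, assuming scikit-learn's RandomForestClassifier and permutation_importance as stand-ins for the paper's actual pipeline; the feature names, synthetic labels, and thresholds below are hypothetical, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical per-pixel features: two spectral bands plus the two
# auxiliary GIS layers discussed in the abstract.
building_height = rng.uniform(0, 60, n)     # metres
imperviousness = rng.uniform(0, 100, n)     # percent
nir = rng.normal(0.3, 0.05, n)              # near-infrared reflectance
tir = rng.normal(300, 2.0, n)               # thermal infrared (K)
# Toy LCZ-like label driven only by the auxiliary layers.
y = (building_height > 25).astype(int) + (imperviousness > 50).astype(int)
X = np.column_stack([nir, tir, building_height, imperviousness])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permuting one feature at a time and measuring the accuracy drop reveals
# which features dominate the classification.
result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["NIR", "TIR", "height", "imperviousness"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the toy labels depend only on the two auxiliary layers, their permutation importances dominate, mirroring the abstract's finding for the RF classifier.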
Award ID(s): 1835739
NSF-PAR ID: 10391104
Journal Name: Computational Urban Science
Volume: 2
Issue: 1
ISSN: 2730-6852
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this and additional information, such as building roof structure, to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset for small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Satellite image and LiDAR data fusion is shown to provide greater classification accuracy than using either data type alone. Model confidence thresholds are adjusted, leading to significant increases in model precision. Networks trained from roof data in Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and Ann Arbor, Michigan.
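The confidence-thresholding step described above (accepting only predictions the model is sure about, to raise precision at the cost of coverage) can be sketched with synthetic scores; the score distribution below is an assumption for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical classifier outputs: probability of one roof class
# for 1000 samples, loosely correlated with the true label.
y_true = rng.integers(0, 2, 1000)
scores = np.clip(0.5 * y_true + rng.normal(0.25, 0.2, 1000), 0.0, 1.0)

def precision_at_threshold(scores, y_true, thresh):
    """Precision of the positive class, counting only confident predictions."""
    accepted = scores >= thresh
    if accepted.sum() == 0:
        return float("nan")
    return float((y_true[accepted] == 1).mean())

# Raising the acceptance threshold trades coverage for precision.
for t in (0.5, 0.7, 0.9):
    print(f"threshold {t}: precision {precision_at_threshold(scores, y_true, t):.3f}")
```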
  2. Messinger, David W.; Velez-Reyes, Miguel (Eds.)
    Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, thus allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometrical spatial shape, like houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, small training sets, or objects with similar reflectance values pose a greater challenge for obtaining high classification accuracy. Therefore, it is difficult to find just one technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and Class-dependent Sparse Representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the per-pixel class probabilities from each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method using different hyperspectral image datasets.
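The two-step fusion idea can be sketched minimally, with scikit-learn's SVC and RandomForestClassifier as stand-in base classifiers and a multinomial (softmax) logistic regression standing in for the paper's CNN fusion stage; the dataset is synthetic and all model choices here are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for enhanced hyperspectral pixels with 3 classes.
X, y = make_classification(n_samples=800, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: base classifiers produce per-class probabilities.
svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def stacked_probs(X):
    """Concatenate the probability vectors of all base classifiers."""
    return np.hstack([svm.predict_proba(X), rf.predict_proba(X)])

# Step 2: a softmax (multinomial logistic) layer fuses the probabilities
# into a unified decision rule.
fusion = LogisticRegression(max_iter=1000).fit(stacked_probs(X_tr), y_tr)
acc = fusion.score(stacked_probs(X_te), y_te)
print(f"fused accuracy: {acc:.3f}")
```

The fusion layer learns how much to trust each base classifier per class, which is the core of the decision-fusion scheme described above.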
  3. A novel hyperspectral image classification algorithm is proposed and demonstrated on benchmark hyperspectral images. We also introduce a hyperspectral sky imaging dataset that we are collecting for detecting the amount and type of cloudiness. An algorithm designed for such systems could improve the spatial and temporal resolution of cloud information vital to understanding Earth’s climate. We discuss the nature of our HSI-Cloud dataset being collected and an algorithm we propose for processing the dataset using a categorical-boosting method. The proposed method utilizes multiple clusterings to augment the dataset and achieves higher pixel classification accuracy. Creating categorical features via clustering enriches the data representation and improves boosting ensembles. For the experimental datasets used in this paper, gradient boosting methods performed favorably against the benchmark algorithms.
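The cluster-augmentation idea (creating categorical features from multiple clusterings and feeding them to a boosting ensemble) can be sketched with scikit-learn; the choice of KMeans at several granularities, the synthetic blob data, and the cluster counts are all assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-pixel spectra (8 "bands", 4 classes).
X, y = make_blobs(n_samples=500, centers=4, n_features=8, random_state=0)

# Augment each pixel with categorical features from multiple clusterings,
# each run at a different granularity.
cluster_feats = [KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
                 for k in (4, 8, 16)]
X_aug = np.column_stack([X] + cluster_feats)

gb = GradientBoostingClassifier(random_state=0)
acc = cross_val_score(gb, X_aug, y, cv=3).mean()
print(f"augmented CV accuracy: {acc:.3f}")
```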
  4. Abstract

    The vertical dimensions of urban morphology, specifically the heights of trees and buildings, exert significant influence on wind flow fields in urban street canyons and the thermal environment of the urban fabric, subsequently affecting the microclimate, noise levels, and air quality. Despite their importance, these critical attributes are less commonly available and rarely utilized in urban climate models compared to planar land use and land cover data. In this study, we explicitly mapped the height of trees and buildings (HiTAB) across the city of Chicago at 1 m spatial resolution using a data fusion approach. This approach integrates high-precision light detection and ranging (LiDAR) point cloud data, a building footprint inventory, and multi-band satellite images. Specifically, the digital terrain and surface models were first created from the LiDAR dataset to calculate the height of surface objects, while the remaining datasets were used to delineate trees and buildings. We validated the derived height information against the existing building database in downtown Chicago and the Meter-scale Urban Land Cover map from the Environmental Protection Agency, respectively. The co-investigation of tree and building heights offers a valuable initiative in the effort to inform urban land surface parameterizations with real-world data. Given their high spatial resolution, the height maps can be adopted in physics-based and data-driven urban models to achieve higher resolution and accuracy while lowering uncertainties. Moreover, our method can be extended to other urban regions, benefiting from the growing availability of high-resolution urban informatics globally. Collectively, these datasets can substantially contribute to future studies on hyper-local weather dynamics, urban heterogeneity, morphology, and planning, providing a more comprehensive understanding of urban environments.
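The height-calculation step above (differencing the LiDAR-derived digital surface and terrain models) can be sketched on a toy raster; the normalized-DSM (nDSM) naming and the sample values are illustrative only:

```python
import numpy as np

# Toy 1 m rasters in metres: a digital surface model (DSM, first-return
# heights) and a digital terrain model (DTM, bare-earth elevation).
dtm = np.array([[10.0, 10.2],
                [10.1, 10.3]])
dsm = np.array([[10.0, 25.2],
                [18.1, 10.3]])  # two cells hold an elevated object

# Object height is the normalized DSM: surface minus terrain, floored at
# zero so noise cannot produce negative heights.
ndsm = np.clip(dsm - dtm, 0.0, None)
print(ndsm)
```

Each nonzero cell of the nDSM is then attributed to a building or a tree using the footprint inventory and satellite imagery, as the abstract describes.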

  5. Messinger, David W.; Velez-Reyes, Miguel (Eds.)
    Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high spatial resolution multispectral image with a lower spatial resolution hyperspectral image to generate a high spatial-spectral hyperspectral image. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion in multiple steps that transition from low to high spatial resolution, using an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy with several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and CNN classification. The classification results show better overall and average accuracy for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
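Of the evaluation classifiers listed above, Minimum Distance is the simplest: each pixel is assigned to the class whose mean spectrum is nearest. A minimal sketch, with hypothetical two-band class-mean spectra:

```python
import numpy as np

def minimum_distance_classify(X, class_means):
    """Assign each pixel to the class with the nearest (Euclidean) mean spectrum.

    X: (n_pixels, n_bands) array; class_means: (n_classes, n_bands) array.
    """
    # Pairwise distances via broadcasting: (n_pixels, n_classes).
    d = np.linalg.norm(X[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Hypothetical class means and two sample pixels.
means = np.array([[0.1, 0.2],
                  [0.8, 0.9]])
X = np.array([[0.15, 0.25],
              [0.70, 0.95]])
labels = minimum_distance_classify(X, means)
print(labels)
```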