On February 6, 2023, a magnitude 7.8 earthquake and its aftershocks caused widespread destruction in Turkey and Syria, killing more than 55,000 people, displacing 3 million people in Turkey and 2.9 million in Syria, and destroying or damaging at least 230,000 buildings. Our research presents detailed city-scale maps of landslides, liquefaction, and building damage from this earthquake, utilizing a novel variational causal Bayesian network. This network integrates InSAR-derived change detection with new empirical ground failure models and building footprints, enabling us to (1) rapidly estimate large-scale building damage, landslides, and liquefaction from remote sensing data, (2) jointly attribute building damage to landslides, liquefaction, and shaking, (3) improve regional landslide and liquefaction predictions impacting infrastructure, and (4) simultaneously identify damage degrees in thousands of buildings. For city-scale, building-by-building damage assessments, we use building footprints and satellite imagery with a spatial resolution of approximately 30 meters. This makes the assessment both timely and fine-grained, enabling damage classification at the individual building level within days of the earthquake. Our findings detail the extent of building damage, including collapses, in Hatay, Osmaniye, Adıyaman, Gaziantep, and Kahramanmaras. We classified building damage into five categories: no damage, slight, moderate, partial collapse, and collapse. We evaluated damage estimates against preliminary ground-truth data reported by the civil authorities. Our results demonstrate the accuracy of our classification system, as evidenced by area under the curve (AUC) scores on the receiver operating characteristic (ROC) curve, which ranged from 0.9588 to 0.9931 across damage categories and regions.
Specifically, our model achieved an AUC of 0.9931 for collapsed buildings in the Hatay/Osmaniye area, indicating a 99.31% probability that the model will rank a randomly chosen collapsed building higher than a randomly chosen non-collapsed building. These accurate, building-specific damage estimates, with greater than 95% classification accuracy across all categories, are crucial for disaster response and can aid agencies in effectively allocating resources and coordinating efforts during disaster recovery.
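The AUC figure quoted above has a direct rank-based reading: it is the probability that a randomly chosen collapsed building receives a higher score than a randomly chosen non-collapsed one. A minimal sketch of that computation, using illustrative scores rather than the paper's data:

```python
import numpy as np

def auc_rank_probability(scores_pos, scores_neg):
    """AUC = P(random positive scores higher than random negative).
    Ties count as 0.5, matching the Mann-Whitney U statistic."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical model scores for collapsed vs. non-collapsed buildings
collapsed = [0.95, 0.88, 0.91, 0.70]
intact = [0.10, 0.35, 0.72, 0.60, 0.05]
print(round(auc_rank_probability(collapsed, intact), 4))  # -> 0.95
```

This pairwise-comparison form is equivalent to integrating the ROC curve, which is how the scores in the abstract would be evaluated.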
Automated building damage assessment and large‐scale mapping by integrating satellite imagery, GIS, and deep learning
Abstract Efficient and accurate building damage assessment is crucial for effective emergency response and resource allocation following natural hazards. However, traditional methods are often time-consuming and labor-intensive. Recent advancements in remote sensing and artificial intelligence (AI) have made it possible to automate the damage assessment process, and previous studies have made notable progress in machine learning classification. However, application in postdisaster emergency response requires an end-to-end model that takes satellite imagery as input and automates the generation of large-scale damage maps as output, which was rarely the focus of previous studies. Addressing this gap, this study integrates satellite imagery, Geographic Information Systems (GIS), and deep learning, enabling the creation of comprehensive, large-scale building damage assessment maps that provide valuable insights into the extent and spatial variation of damage. The effectiveness of this methodology is demonstrated in Galveston County following Hurricane Ike, where the classification of a large ensemble of buildings was automated using deep learning models trained on the xBD data set. The results showed that GIS can automate the extraction of subimages with high accuracy, while fine-tuning can enhance the robustness of the damage classification to generate highly accurate large-scale damage maps. These damage maps were validated against historical reports.
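The end-to-end step this abstract describes, using GIS building footprints to automatically extract per-building subimages from a satellite mosaic, can be sketched as a simple windowed crop. The array shapes, patch size, and centroid list below are illustrative assumptions, not details from the study:

```python
import numpy as np

def extract_subimages(image, centroids_px, patch=64):
    """Crop a fixed-size window around each building centroid (pixel
    coordinates). Windows falling outside the mosaic are skipped."""
    half = patch // 2
    h, w = image.shape[:2]
    patches = []
    for r, c in centroids_px:
        if half <= r <= h - half and half <= c <= w - half:
            patches.append(image[r - half:r + half, c - half:c + half])
    return np.stack(patches) if patches else np.empty((0, patch, patch))

mosaic = np.random.rand(512, 512)            # stand-in for a satellite tile
footprints = [(100, 200), (30, 30), (5, 5)]  # (row, col); last two fall too close to the edge
batch = extract_subimages(mosaic, footprints)
print(batch.shape)  # -> (1, 64, 64)
```

In practice the centroids would come from projecting GIS footprint geometries into the image's pixel grid; the resulting patch batch is what a fine-tuned classifier would consume.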
- Award ID(s):
- 2052930
- PAR ID:
- 10520016
- Publisher / Repository:
- Wiley
- Date Published:
- Journal Name:
- Computer-Aided Civil and Infrastructure Engineering
- ISSN:
- 1093-9687
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Geographic information systems (GIS) provide accurate maps of terrain, roads, waterways, and building footprints and heights. Aircraft, particularly small unmanned aircraft systems (UAS), can exploit this and additional information such as building roof structure to improve navigation accuracy and safely perform contingency landings, particularly in urban regions. However, building roof structure is not fully provided in maps. This paper proposes a method to automatically label building roof shape from publicly available GIS data. Satellite imagery and airborne LiDAR data are processed and manually labeled to create a diverse annotated roof image dataset for small to large urban cities. Multiple convolutional neural network (CNN) architectures are trained and tested, with the best-performing networks providing a condensed feature set for support vector machine and decision tree classifiers. Fusing satellite image and LiDAR data is shown to provide greater classification accuracy than using either data type alone. Adjusting model confidence thresholds leads to significant increases in model precision. Networks trained on roof data from Witten, Germany and Manhattan (New York City) are evaluated on independent data from these cities and from Ann Arbor, Michigan.
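The confidence-threshold adjustment mentioned above amounts to abstaining whenever the classifier's top probability falls below a cutoff, trading coverage for precision. A minimal sketch, with a hypothetical roof label set and made-up probabilities:

```python
import numpy as np

def predict_with_threshold(probs, classes, tau=0.8):
    """Return a class label only when the top class probability clears
    tau; low-confidence samples are deferred (None) for manual review."""
    probs = np.asarray(probs)
    idx = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return [classes[i] if c >= tau else None for i, c in zip(idx, conf)]

roof_classes = ["flat", "gabled", "hipped"]  # illustrative label set
p = [[0.90, 0.05, 0.05],   # confident flat
     [0.40, 0.35, 0.25],   # ambiguous -> abstain
     [0.10, 0.85, 0.05]]   # confident gabled
print(predict_with_threshold(p, roof_classes))  # -> ['flat', None, 'gabled']
```

Raising `tau` removes the least certain predictions first, which is why precision rises even though fewer roofs get labeled.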
-
Inundation mapping is a critical task for damage assessment, emergency management, and prioritizing relief efforts during a flooding event. Remote sensing has been an effective tool for interpreting and analyzing water bodies and detecting floods over the past decades. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have demonstrated promising performance in remote sensing image classification for many applications, including inundation mapping. Unlike conventional algorithms, deep learning can learn features automatically from large datasets. This research compares and investigates the performance of two state-of-the-art methods for 3D inundation mapping: deep learning-based image analysis and the Geomorphic Flood Index (GFI). The first method, deep learning image analysis, involves three steps: 1) image classification to delineate flood boundaries, 2) integration of the flood boundaries with topography data to create a three-dimensional (3D) water surface, and 3) comparison of the 3D water surface with pre-flood topography to estimate floodwater depth. The second method, GFI, involves three phases: 1) calculating river basin morphological information, such as river height (hr) and elevation difference (H), 2) calibrating and measuring the GFI to delineate flood boundaries, and 3) calculating the coefficient parameter (α) and correcting the value of hr to estimate inundation depth. The methods were implemented to generate 3D inundation maps over Princeville, North Carolina, United States during Hurricane Matthew in 2016. The deep learning method demonstrated better performance, with a root mean square error (RMSE) of 0.26 m for water depth. It also achieved about 98% accuracy in delineating the flood boundaries using UAV imagery. This approach is efficient for extracting and creating 3D flood extent maps at different scales to support emergency response and recovery activities during a flood event.
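Step 3 of the deep learning method, subtracting pre-flood topography from the reconstructed 3D water surface, reduces to an elementwise difference clipped at zero, and the RMSE used for evaluation is standard. A toy sketch with invented elevations (not Princeville data):

```python
import numpy as np

def flood_depth(water_surface, terrain):
    """Depth = water surface elevation minus pre-flood DEM elevation,
    clipped at zero where the terrain sits above the water."""
    return np.maximum(np.asarray(water_surface) - np.asarray(terrain), 0.0)

def rmse(estimate, reference):
    """Root mean square error between estimated and reference depths."""
    e = np.asarray(estimate) - np.asarray(reference)
    return float(np.sqrt(np.mean(e ** 2)))

# Toy 1D transect (meters); values are illustrative only
surface = [10.0, 10.0, 10.0, 10.0]
dem     = [ 9.0,  9.5, 10.5,  8.0]
depths = flood_depth(surface, dem)
print(depths.tolist())  # -> [1.0, 0.5, 0.0, 2.0]
print(round(rmse(depths, [1.1, 0.4, 0.0, 2.1]), 4))  # -> 0.0866
```

The same subtraction runs over full DEM rasters in practice, with the water surface interpolated from the classified flood boundary.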
-
Coastal wetlands, especially tidal marshes, play a crucial role in supporting ecosystems and slowing shoreline erosion. Accurate and cost-effective identification and classification of various marsh types, such as high and low marshes, are important for effective coastal management and conservation endeavors. However, mapping tidal marshes is challenging due to heterogeneous coastal vegetation and dynamic tidal influences. In this study, we employ a deep learning segmentation model to automate the identification and classification of tidal marsh communities in coastal Virginia, USA, using seasonal, publicly available satellite and aerial images. This study leverages the combined capabilities of Sentinel-2 and National Agriculture Imagery Program (NAIP) imagery and a UNet architecture to accurately classify tidal marsh communities. We illustrate that by leveraging features learned from data-abundant regions and small quantities of high-quality training data collected from the target region, an accuracy as high as 88% can be achieved in the classification of marsh types, specifically high marsh and low marsh, at a spatial resolution of 0.6 m. This study contributes to the field of marsh mapping by highlighting the potential of combining multispectral satellite imagery and deep learning for accurate and efficient marsh type classification.
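The Sentinel-2/NAIP combination this abstract describes can be realized, at its simplest, by concatenating co-registered bands along the channel axis before feeding a segmentation model. The band counts and tile size below are assumptions for illustration, not the study's actual configuration:

```python
import numpy as np

def fuse_inputs(sentinel2, naip):
    """Stack Sentinel-2 bands with NAIP bands along the channel axis,
    assuming both rasters are already resampled to the same grid."""
    assert sentinel2.shape[:2] == naip.shape[:2], "co-register and resample first"
    return np.concatenate([sentinel2, naip], axis=-1)

s2 = np.zeros((256, 256, 4))      # e.g. four Sentinel-2 bands resampled to the NAIP grid
aerial = np.zeros((256, 256, 4))  # e.g. NAIP R, G, B, NIR at 0.6 m
x = fuse_inputs(s2, aerial)
print(x.shape)  # -> (256, 256, 8)
```

A UNet (or any encoder-decoder segmenter) would then take this 8-channel tile as input; the fine spatial detail comes from NAIP while Sentinel-2 contributes spectral and seasonal information.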
-
Building an efficient and accurate pixel-level labeling framework for large-scale, high-resolution satellite imagery is an important machine learning application in the remote sensing area. Due to the very limited amount of ground-truth data, we employ a well-performing superpixel tessellation approach to segment the image into homogeneous regions and then use these irregular-shaped regions as the foundation for the dense labeling work. A deep model based on generative adversarial networks is trained to learn discriminating features from the image data without requiring any additional labeled information. In the subsequent classification step, we adopt the discriminator of this unsupervised model as a feature extractor and train a fast and robust support vector machine to assign the pixel-level labels. In the experiments, we evaluate our framework in terms of pixel-level classification accuracy on satellite imagery of different geographical types. The results show that our dense-labeling framework is very competitive compared to state-of-the-art methods that heavily rely on prior knowledge or other large-scale annotated datasets.
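The final dense-labeling step, broadcasting one predicted label per superpixel to every pixel in that region, is a simple lookup. A minimal sketch with a toy superpixel map and invented region labels (in the paper's pipeline, the SVM on discriminator features would supply `per_region`):

```python
import numpy as np

def densify_labels(superpixel_map, region_labels):
    """Broadcast one predicted label per superpixel region to every pixel,
    turning region-level predictions into a dense pixel-level label map."""
    lut = np.asarray(region_labels)
    return lut[superpixel_map]

# 3 superpixels (ids 0..2) over a tiny 2x4 tile; labels: 0=water, 1=urban
segments = np.array([[0, 0, 1, 1],
                     [2, 2, 1, 1]])
per_region = [1, 0, 1]  # hypothetical classifier output for region ids 0, 1, 2
dense = densify_labels(segments, per_region)
print(dense.tolist())  # -> [[1, 1, 0, 0], [1, 1, 0, 0]]
```

Because each superpixel is homogeneous by construction, one classification per region is enough to label every pixel it contains.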