

Title: Automated building image extraction from 360° panoramas for postdisaster evaluation
Abstract

After a disaster, teams of structural engineers collect vast amounts of images from damaged buildings to obtain new knowledge and extract lessons from the event. However, in many cases, the images collected are captured without sufficient spatial context. When damage is severe, it may be quite difficult to even recognize the building. Accessing images of the predisaster condition of those buildings is required to accurately identify the cause of the failure or the actual loss in the building. Here, to address this issue, we develop a method to automatically extract pre‐event building images from 360° panorama images (panoramas). By providing a geotagged image collected near the target building as the input, panoramas close to the input image location are automatically downloaded through street view services (e.g., Google or Bing in the United States). By computing the geometric relationship between the panoramas and the target building, the most suitable projection direction for each panorama is identified to generate high‐quality 2D images of the building. Region‐based convolutional neural networks are exploited to recognize the building within those 2D images. Several panoramas are used so that the detected building images provide various viewpoints of the building. To demonstrate the capability of the technique, we consider residential buildings in Holiday Beach in Rockport, Texas, United States, that experienced significant devastation in Hurricane Harvey in 2017. Using geotagged images gathered during actual postdisaster building reconnaissance missions, we verify the method by successfully extracting residential building images from Google Street View images, which were captured before the event.
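The key geometric step described above, computing the bearing from a panorama location toward the target building and rendering a rectilinear (pinhole-style) 2D view in that direction, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the equirectangular panorama layout, the north-aligned heading, and all function names are assumptions.

```python
import math
import numpy as np

def bearing_to_target(cam_lat, cam_lon, bld_lat, bld_lon):
    """Approximate compass bearing (degrees, clockwise from north)
    from the camera location to the building location."""
    dlat = math.radians(bld_lat - cam_lat)
    dlon = math.radians(bld_lon - cam_lon) * math.cos(math.radians(cam_lat))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def project_pano(pano, yaw_deg, fov_deg=90.0, out_size=512):
    """Sample a rectilinear view from an equirectangular panorama.

    pano: H x W x C array; yaw_deg is measured clockwise from the
    panorama's forward direction (assumed aligned with north here).
    Uses nearest-neighbor sampling for brevity.
    """
    h, w = pano.shape[:2]
    f = (out_size / 2) / math.tan(math.radians(fov_deg) / 2)
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    # Camera-space ray for each output pixel (z is the optical axis).
    x = xs - out_size / 2
    y = ys - out_size / 2
    z = np.full_like(x, f, dtype=float)
    yaw = math.radians(yaw_deg)
    # Rotate the rays about the vertical axis by the desired yaw.
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    lon = np.arctan2(xr, zr)                 # azimuth in [-pi, pi]
    lat = np.arctan2(-y, np.hypot(xr, zr))   # elevation in [-pi/2, pi/2]
    u = ((lon / math.pi + 1) / 2 * (w - 1)).astype(int)
    v = ((1 - (lat / (math.pi / 2) + 1) / 2) * (h - 1)).astype(int)
    return pano[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)]
```

The bearing from `bearing_to_target` would be passed as `yaw_deg` so that each panorama is projected toward the building before the detector is applied.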

 
NSF-PAR ID:
10115982
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer-Aided Civil and Infrastructure Engineering
Volume:
35
Issue:
3
ISSN:
1093-9687
Page Range / eLocation ID:
p. 241-257
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Street view imagery databases such as Google Street View, Mapillary, and Karta View provide great spatial and temporal coverage for many cities globally. Those data, when coupled with appropriate computer vision algorithms, can provide an effective means to analyse aspects of the urban environment at scale. As an effort to enhance current practices in urban flood risk assessment, this project investigates a potential use of street view imagery data to identify building features that indicate buildings’ vulnerability to flooding (e.g., basements and semi-basements). In particular, this paper discusses (1) building features indicating the presence of basement structures, (2) available imagery data sources capturing those features, and (3) computer vision algorithms capable of automatically detecting the features of interest. The paper also reviews existing methods for reconstructing geometry representations of the extracted features from images and potential approaches to account for data quality issues. Preliminary experiments were conducted, which confirmed the usability of the freely available Mapillary images for detecting basement railings as an example type of basement features, as well as geolocating the features.
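The geolocation step mentioned above can be illustrated by triangulating a detected feature from two camera positions and the bearings at which it was detected. This is a sketch under a local flat-plane approximation; the two-view setup and function names are assumptions, not the paper's method.

```python
import math

def triangulate(cam1, brg1, cam2, brg2):
    """Estimate a feature's (lat, lon) from two camera positions and
    the compass bearings (degrees clockwise from north) at which the
    feature was detected. Bearings must not be parallel."""
    lat0 = math.radians((cam1[0] + cam2[0]) / 2)
    R = 6371000.0  # mean Earth radius, metres
    def to_xy(lat, lon):
        # Local planar coordinates: x east, y north, in metres.
        return (math.radians(lon) * math.cos(lat0) * R,
                math.radians(lat) * R)
    x1, y1 = to_xy(*cam1)
    x2, y2 = to_xy(*cam2)
    # Unit direction vectors: bearing 0 = +y (north), 90 = +x (east).
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t (ray intersection).
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    fx, fy = x1 + t * d1[0], y1 + t * d1[1]
    return (math.degrees(fy / R), math.degrees(fx / (R * math.cos(lat0))))
```

In practice the camera positions come from the imagery metadata and the bearings from the detection's horizontal position in the panorama.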

     
  2. We investigate how real-time, 360° view synthesis can be achieved on current virtual reality hardware from a single panoramic image input. We introduce a lightweight method to automatically convert a single panoramic input into a multi-cylinder image representation that supports real-time, free-viewpoint view synthesis rendering for virtual reality. We apply an existing convolutional neural network trained on pinhole images to a cylindrical panorama with wrap padding to ensure agreement between the left and right edges. The network outputs a stack of semi-transparent panoramas at varying depths which can be easily rendered and composited with over blending. Quantitative experiments and a user study show that the method produces convincing parallax and fewer artifacts than a textured mesh representation. 
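The wrap padding used to keep the left and right edges of the cylindrical panorama consistent can be sketched as circular padding along the width axis. This is a minimal NumPy illustration of the idea, not the paper's code.

```python
import numpy as np

def wrap_pad(pano: np.ndarray, pad: int) -> np.ndarray:
    """Circularly pad a cylindrical panorama along its width so that a
    convolution window sees the same content on both sides of the seam."""
    return np.concatenate([pano[:, -pad:], pano, pano[:, :pad]], axis=1)
```

In PyTorch, `torch.nn.functional.pad(x, (pad, pad, 0, 0), mode='circular')` achieves the same padding along the width of an NCHW tensor; the network output is then cropped back to the original width.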
  3. We introduce a method to automatically convert a single panoramic input into a multi-cylinder image representation that supports real-time, free-viewpoint view synthesis for virtual reality. We apply an existing convolutional neural network trained on pinhole images to a cylindrical panorama with wrap padding to ensure agreement between the left and right edges. The network outputs a stack of semi-transparent panoramas at varying depths which can be easily rendered and composited with over blending. Initial experiments show that the method produces convincing parallax and cleaner object boundaries than a textured mesh representation. 
  4. Among the various elements of urban infrastructure, existing buildings present a significant opportunity for improved sustainability, considering that the building sector accounts for approximately 40% of total primary energy consumption and 72% of electricity consumption in the United States. Many efforts focus on reducing the energy consumption of residential buildings. Data-validated building energy modeling methods support this effort by enabling the identification of the potential savings associated with different retrofit strategies. However, many uncertainties can impact the accuracy of energy model results, one of which is the weather input data. Weather data measured at each building could address this concern; however, collecting weather station data at every building is costly and typically not feasible. Some weather station data are already collected, but these stations are generally located at airports rather than near buildings, and thus do not capture the local, spatially varying weather conditions that are documented to occur, particularly in urban areas. In this study we address the impact of spatial temperature differences on residential building energy use. An energy model was developed in EnergyPlus for a residential building located in the Mueller neighborhood of Austin, TX, and was validated using actual hourly measured electricity consumption. Using the validated model, the impact of measured spatial temperature differences on building energy consumption was investigated using multiple weather stations located throughout the urban area with different urban fractions. The results indicate that the energy consumption of a residential building in a city with a 10% higher urban fraction would increase by approximately 10%. This variation in energy consumption is likely due to urban heat island (UHI) effects occurring in densely built urban areas. 
  5. Image data plays a pivotal role in the current data-driven era, particularly in applications such as computer vision, object recognition, and facial identification. Google Maps® stands out as a widely used platform that heavily relies on street view images. To fulfill the pressing need for an effective and distributed mechanism for image data collection, we present a framework that utilizes smart contract technology and open-source robots to gather street-view image sequences. The proposed framework also includes a protocol for maintaining these sequences using a private blockchain capable of retaining different versions of street views while ensuring the integrity of collected data. With this framework, Google Maps® data can be securely collected, stored, and published on a private blockchain. By conducting tests with actual robots, we demonstrate the feasibility of the framework and its capability to seamlessly upload privately maintained blockchain image sequences to Google Maps® using the Google Street View® Publish API. 
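The integrity property described above, retaining versions of a street view while making tampering detectable, can be sketched with a minimal hash chain. This is an illustration of the general idea only, assuming SHA-256 and a JSON record format; it is not the paper's protocol.

```python
import hashlib
import json

def make_block(prev_hash, image_bytes, meta):
    """Link an image version to its predecessor: altering any earlier
    image or record changes every downstream block hash."""
    record = {
        "prev": prev_hash,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "meta": meta,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify_chain(blocks):
    """Recompute each block's hash and check the prev links."""
    prev = "0" * 64  # genesis marker
    for b in blocks:
        body = {k: v for k, v in b.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if b["prev"] != prev or b["hash"] != expected:
            return False
        prev = b["hash"]
    return True
```

Any modification to a stored image or its metadata breaks verification for that block and every block after it.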