

Search for: All records

Award ID contains: 1635378

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available October 1, 2024
  2. To assess and document construction and building performance, large amounts of visual data are now captured and stored through camera-equipped platforms such as wearable cameras, unmanned aerial/ground vehicles, and smartphones. Because such footage is recorded continuously, not every frame is intentionally taken, and thus not every frame is worth processing for construction and building performance analysis. Since many frames contain non-construction content, the content of each recorded frame would otherwise have to be manually inspected for relevance to the goal of the visual assessment before the data can be processed. To address these challenges, this paper presents a method that automatically filters construction big visual data and requires no human annotations. To overcome the limitations of a purely discriminative approach built on manually labeled images, we construct a generative model from an unlabeled visual dataset and use it to find construction-related frames in big visual data from jobsites. First, composition-based snap point detection combined with domain adaptation filters out most accidentally recorded frames in the footage. Then, a discriminative classifier trained on jobsite visual data eliminates non-construction-related images (a minimal illustrative sketch of this two-stage filtering appears after this list). To evaluate the reliability of the proposed method, we obtained ground truth based on human judgment for each photo in our testing dataset. Despite learning without any explicit labels, the proposed method achieves a practical range of accuracy and generally outperforms prior snap point detection. The fidelity of the algorithm is discussed in detail through case studies. By focusing on selective visual data, practitioners can spend less time browsing large amounts of visual data and more time on leveraging it to facilitate decision-making in built environments.
  3. Unstructured construction sites, including incomplete structures and unsecured resources (e.g., materials, equipment, and temporary facilities), are among the environments most vulnerable to windstorms such as hurricanes. Wind-induced cascading damage causes substantial losses, disruption, and considerable schedule delays in construction projects. It can also negatively affect neighboring buildings and interdependent infrastructure (e.g., electric power transmission or transportation systems), triggering serious economic losses for the community. Nonetheless, prior work on disaster management has focused mainly on post-disaster assessment and the reconstruction of built environments, so predicting the potential risks of expected disasters for proactive preparedness remains largely unexplored. This paper presents a new Imaging-to-Simulation framework that can uncover potential risks of wind-induced cascading damage to construction projects and its negative impacts on neighboring communities. The outcomes are expected to benefit society by enhancing current windstorm preparedness and mitigation plans, ultimately promoting public safety, reducing property losses and insurance costs, and raising awareness of a 'Culture of Preparedness' for disasters.
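The sketch below is a minimal illustration, not the authors' implementation, of the two-stage filtering pipeline summarized in item 2: a composition-based snap-point score first drops accidentally recorded frames, and a discriminative classifier then keeps construction-related ones. The feature extraction, threshold value, and classifier choice (a scikit-learn logistic regression) are illustrative assumptions.

```python
# Minimal sketch of two-stage visual-data filtering, under the assumptions above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def composition_score(frame: np.ndarray) -> float:
    """Hypothetical snap-point score in [0, 1]; a real system would use
    composition features (e.g., edge/saliency layout) plus domain adaptation."""
    # Placeholder proxy: fraction of pixels that are neither very dark nor saturated.
    return float(np.mean((frame > 10) & (frame < 245)))

def filter_frames(frames, clf: LogisticRegression, snap_threshold: float = 0.5):
    """Stage 1: remove frames scoring below the snap-point threshold.
    Stage 2: keep frames the classifier labels as construction-related (class 1)."""
    candidates = [f for f in frames if composition_score(f) >= snap_threshold]
    if not candidates:
        return []
    feats = np.stack([f.reshape(-1)[:256].astype(float) for f in candidates])
    keep = clf.predict(feats) == 1
    return [f for f, k in zip(candidates, keep) if k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy training data standing in for construction vs. non-construction features.
    X = rng.normal(size=(40, 256))
    y = rng.integers(0, 2, size=40)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    frames = [rng.integers(0, 256, size=(16, 16), dtype=np.uint8) for _ in range(5)]
    print(f"{len(filter_frames(frames, clf))} of {len(frames)} frames kept")
```

In practice, the snap-point score would come from composition features adapted to jobsite footage and the classifier would be trained on features extracted from jobsite imagery rather than random vectors; the toy data here only demonstrates the control flow of the two stages.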