This content will become publicly available on August 1, 2023
- Award ID(s): 2120155
- Publication Date:
- NSF-PAR ID: 10378913
- Journal Name: Journal of Seismology
- Volume: 26
- Issue: 4
- Page Range or eLocation-ID: 731 to 756
- ISSN: 1383-4649
- Sponsoring Org: National Science Foundation
More Like this
- Heat loss quantification (HLQ) is an essential step in improving a building's thermal performance and optimizing its energy usage. While this problem is well studied in the literature, most existing studies are either qualitative or quantitative analyses driven by a small number of localized building-envelope points, and are therefore not suitable for automated solutions in energy audit applications. This work attempts to fill that knowledge gap by utilizing intensive thermal data (on the order of 100,000-plus images) and constitutes a relatively new scale of analysis in energy audit applications. Specifically, we demonstrate a novel process that uses deep-learning methods to segment more than 100,000 thermal images collected from an unmanned aerial system (UAS). Quantifying the heat loss of a building envelope requires multiple stages of computation: object detection (using Mask R-CNN/Faster R-CNN), surface-temperature estimation (using two clustering methods), and finally calculation of the overall heat transfer coefficient (the U-value). The proposed model was applied to eleven academic campuses across the state of North Dakota. Preliminary findings indicate that Mask R-CNN outperformed other instance segmentation models, with an mIOU of 73% for facades, 55% for windows, 67% for roofs, and 24% …
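The last stage of the pipeline described above, turning segmented surface temperatures into a U-value, can be sketched with a standard convective-plus-radiative flux approximation. This is only an illustration: the convective coefficient `h_c` and the emissivity default below are assumed placeholder values, not parameters from the paper.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in, t_out, t_wall, emissivity=0.9, h_c=3.0):
    """Estimate the overall heat transfer coefficient U (W/m^2 K) of an
    envelope element from an interior thermography reading.

    t_in, t_out, t_wall: indoor air, outdoor air, and interior wall
    surface temperatures in kelvin. h_c and emissivity are illustrative
    defaults, not values measured in the study.
    """
    q_rad = emissivity * SIGMA * (t_in**4 - t_wall**4)  # radiative flux into the wall
    q_conv = h_c * (t_in - t_wall)                      # convective flux into the wall
    return (q_rad + q_conv) / (t_in - t_out)            # flux per unit temperature difference
```

With a 20 C room, a 15 C interior wall surface, and 0 C outside, this yields a U-value around 2 W/m^2 K, a plausible figure for an uninsulated wall.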
- Dynamic site characterization was performed at 25 sites on the western portion of the Mexico City Basin that were severely damaged during the Mw 7.1 2017 Puebla–Morelos, Mexico, earthquake. Testing was conducted using active and passive seismic surface wave methods and the microtremor horizontal-to-vertical spectral ratio (HVSR) method to determine site periods and to develop one-dimensional (1D) shear wave velocity (Vs) profiles for the first 60 m of the subsoil. The measured site periods were compared to site period maps developed in 2004 and 2020, along with values computed using the Design Seismic Actions System (SASID) software following the 2020 version of the Complementary Technical Norms for Seismic Design (NTC-DS). The most noticeable biases in the predictions from the 2004 site period map were observed at the boundary between Zone II and Zone IIIa, where site periods are overestimated. These estimates were improved in the 2020 site period map and agreed closely with the SASID-computed site period values. The Vs, depth, and thickness of the lacustrine clay layer were also found to be quite variable within the basin. The softest sites, with a Vs between 45 and 57 m/s, are located between the lakebeds. Sites located toward the outer …
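The microtremor H/V method mentioned above can be sketched in a few lines: take amplitude spectra of the three components, form the horizontal-to-vertical ratio, and read the site period off the peak. This is a minimal sketch; the actual processing in such studies (time windowing, anti-trigger selection, Konno-Ohmachi smoothing) is considerably more involved.

```python
import numpy as np

def hvsr_site_period(ns, ew, vert, fs):
    """Return the site period T0 = 1/f0 from a three-component
    microtremor record sampled at fs Hz, using a bare-bones H/V ratio.
    """
    freqs = np.fft.rfftfreq(len(vert), d=1.0 / fs)
    # Geometric mean of the two horizontal amplitude spectra.
    h = np.sqrt(np.abs(np.fft.rfft(ns)) * np.abs(np.fft.rfft(ew)))
    v = np.abs(np.fft.rfft(vert))
    # Restrict to a band of engineering interest (illustrative limits).
    band = (freqs > 0.1) & (freqs < 10.0)
    ratio = h[band] / np.maximum(v[band], 1e-12)
    f0 = freqs[band][np.argmax(ratio)]  # peak of the H/V curve
    return 1.0 / f0
```

Fed a synthetic record whose horizontal components resonate at 1 Hz over white vertical noise, the function recovers a site period of about 1 s.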
- SUMMARY Interfaces are an important part of Earth's layered structure. Here, we developed a new model parametrization and an iterative linearized inversion method that determine 1-D crustal velocity structure using surface wave dispersion, teleseismic P-wave receiver functions, and Ps and PmP traveltimes. Unlike previous joint inversion methods, the new model parametrization includes interface depths and layer Vp/Vs ratios, so that smoothness constraints can be conveniently applied to the velocities of individual layers without affecting the velocity discontinuities across the interfaces. It also allows interface-related observations, such as Ps and PmP traveltimes, to be added to the joint inversion, eliminating the trade-off between interface depth and Vp/Vs ratio and thereby reducing the uncertainties of the results. Numerical tests show that the method is computationally efficient and that the inversion results are robust and independent of the initial model. Applying the method to a dense linear array across the Wabash Valley Seismic Zone (WVSZ) produced a high-resolution crustal image of this seismically active region. The results show a 51–55-km-thick crust with a mid-crustal interface at 14–17 km. The crustal Vp/Vs ratio varies from 1.69 to 1.90. Three pillow-like high-velocity bodies, about 100 km apart, sit at the base of the crust, directly above each of …
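The Ps traveltimes used as interface-related data in a joint inversion like the one above are commonly computed with the standard 1-D delay-time expression for a converted phase behind the direct P arrival. A sketch, with illustrative crustal velocities and ray parameter (not values from this study):

```python
import math

def ps_delay_time(depth_km, vp, vs, p=0.06):
    """Delay (s) of the Ps conversion relative to direct P for a flat
    interface at depth_km (km), layer velocities vp and vs (km/s), and
    horizontal slowness p (s/km) -- the standard 1-D expression used in
    receiver-function analysis.
    """
    qs = math.sqrt(vs**-2 - p**2)  # S-wave vertical slowness
    qp = math.sqrt(vp**-2 - p**2)  # P-wave vertical slowness
    return depth_km * (qs - qp)
```

For a 50 km interface with vp = 6.5 km/s and Vp/Vs = 1.78, this gives a Ps delay of roughly 6 s, in the range typically observed for Moho conversions. Because the delay depends on both depth and Vp/Vs, adding such data to the inversion is what breaks the trade-off noted in the abstract.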
- Obeid, I. (Ed.) The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open-source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue, including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high-quality annotations of breast tissue. It is well known that state-of-the-art machine learning algorithms require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] deliver impressive performance with complex deep learning models because they have developed large corpora that support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not …
- The first major goal of this project is to build a state-of-the-art information storage, retrieval, and analysis system that utilizes the latest technology and industry methods. This system is leveraged to accomplish another major goal: supporting modern search and browse capabilities for a large collection of tweets from the Twitter social media platform, web pages, and electronic theses and dissertations (ETDs). The backbone of the information system is a Docker container cluster managed with Rancher and Kubernetes. Information retrieval and visualization are accomplished in a pipelined fashion with containers, whether in the cluster or on virtual machines, running Elasticsearch and Kibana, respectively. In addition to traditional searching and browsing, the system supports full-text and metadata searching. Search results include facets as a modern means of browsing among related documents. The system applies text analysis and machine learning to reveal new properties of the collection data; these properties assist in generating the available facets. Recommendations are also presented with search results, based on associations among documents and on logged user activity. The information system was co-designed by five teams of Virginia Tech graduate students, all members of the same computer science class, CS 5604. Although the project is an academic …
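The faceted full-text search described above maps naturally onto an Elasticsearch request that pairs a `match` query with `terms` aggregations, which the UI then renders as facets. A minimal sketch of such a request builder; the field names (`content`, `<facet>.keyword`) are illustrative assumptions, not the project's actual index mapping.

```python
def build_faceted_query(text, facet_fields, size=10):
    """Build an Elasticsearch _search request body combining full-text
    search on a 'content' field with terms aggregations for facets.
    Field names are hypothetical placeholders for the real mapping.
    """
    return {
        "size": size,
        "query": {"match": {"content": text}},
        # One terms aggregation per facet; '.keyword' targets the
        # unanalyzed sub-field conventionally used for exact buckets.
        "aggs": {
            f: {"terms": {"field": f + ".keyword"}} for f in facet_fields
        },
    }
```

The resulting dict would be POSTed to an index's `_search` endpoint; each aggregation's buckets become one facet group alongside the hit list.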