Title: BUILD-KG: Integrating Heterogeneous Data Into Analytics-Enabling Knowledge Graphs
Knowledge graphs (KGs), with their flexible encoding of heterogeneous data, are increasingly used in a variety of applications. At the same time, domain data are routinely stored in formats such as spreadsheets, text, or figures. Storing such data in KGs can open the door to more complex types of analytics, which might not be supported by the data sources taken in isolation. Giving domain experts the option to use a predefined, automated workflow for integrating heterogeneous data from multiple sources into a single unified KG could significantly reduce their data-integration time and resource burden, while potentially yielding higher-quality KG data capable of supporting meaningful rule mining and machine learning. In this paper we introduce a domain-agnostic workflow called BUILD-KG for integrating heterogeneous scientific and experimental data from multiple sources into a single unified KG, potentially enabling richer analytics. BUILD-KG is broadly applicable, accepting input data in popular structured and unstructured formats. BUILD-KG is also designed to be carried out with end users as humans-in-the-loop, which makes it domain aware. We present the workflow, report on our experiences applying it to scientific and experimental data in the materials science domain, and provide suggestions for involving domain scientists in BUILD-KG as humans-in-the-loop.
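As a purely illustrative sketch of the kind of integration such a workflow automates (this is not the BUILD-KG implementation; the column names, entity identifiers, and predicates below are hypothetical placeholders), the following Python snippet merges tabular sample records and text-extracted facts into one unified property graph:

```python
# Generic sketch: unify tabular rows and text-extracted facts in one graph.
# NOT the BUILD-KG implementation; columns, identifiers, and predicates are
# hypothetical placeholders.
import pandas as pd
import networkx as nx

kg = nx.MultiDiGraph()

# Source 1: a spreadsheet of experimental samples (synthetic stand-in data).
samples = pd.DataFrame(
    {"sample_id": ["S-17", "S-18"],
     "alloy": ["AlSi10Mg", "Ti6Al4V"],
     "hardness": [120, 349]}
)
for _, row in samples.iterrows():
    sample = f"sample:{row['sample_id']}"
    material = f"material:{row['alloy']}"
    kg.add_node(sample, type="Sample", hardness=row["hardness"])
    kg.add_node(material, type="Material")
    kg.add_edge(sample, material, key="madeOf")

# Source 2: subject-predicate-object triples produced by some upstream
# text/figure extraction step (placeholder data).
extracted_triples = [
    ("material:AlSi10Mg", "studiedIn", "paper:doi-10.0000/xyz"),
    ("sample:S-17", "processedBy", "process:laser-powder-bed-fusion"),
]
for subj, pred, obj in extracted_triples:
    kg.add_node(subj)
    kg.add_node(obj)
    kg.add_edge(subj, obj, key=pred)

# The unified graph can now feed downstream analytics (queries, rule mining,
# graph machine learning) that no single source supports on its own.
print(kg.number_of_nodes(), "nodes,", kg.number_of_edges(), "edges")
```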
Award ID(s):
2019435
PAR ID:
10519850
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-2445-7
Page Range / eLocation ID:
2965 to 2974
Format(s):
Medium: X
Location:
Sorrento, Italy
Sponsoring Org:
National Science Foundation
More Like this
  1. McHenry, K; Schreiber, L (Ed.)
    The paleogeosciences are becoming increasingly interdisciplinary, and studies increasingly rely on large collections of data drawn from multiple data repositories. Integrating diverse datasets from multiple sources into complex workflows makes reproducible, open science harder: data formats and tools are often not interoperable, so data must be manually reshaped into standardized formats, which breaks data provenance and confounds reproducibility. Here we present a notebook that demonstrates how the Linked PaleoData (LiPD) framework is used as an interchange format to integrate data from multiple sources into a complex workflow using emerging R packages for geochronological uncertainty quantification and abrupt change detection. Specifically, in this notebook we use the neotoma2 and lipdR packages to access paleoecological data from the Neotoma Database and paleoclimate data from compilations hosted on Lipdverse. Age uncertainties for datasets from both sources are then quantified using the geoChronR package, and those data, together with their associated age uncertainties, are investigated for abrupt changes using the actR package, with accompanying visualizations. The result is an integrated, reproducible R workflow that demonstrates how this complex series of multisource data integration, analysis, and visualization steps can be woven into an efficient, open scientific narrative.
    To address the rapid growth of scientific publications and data in biomedical research, knowledge graphs (KGs) have become a critical tool for integrating large volumes of heterogeneous data to enable efficient information retrieval and automated knowledge discovery. However, transforming unstructured scientific literature into KGs remains a significant challenge, with previous methods unable to achieve human-level accuracy. Here we used an information extraction pipeline that won first place in the LitCoin Natural Language Processing Challenge (2022) to construct a large-scale KG named iKraph from all PubMed abstracts. The extracted information matches human expert annotations and significantly exceeds the content of manually curated public databases. To enhance the KG’s comprehensiveness, we integrated relation data from 40 public databases and relation information inferred from high-throughput genomics data. This KG enables rigorous performance evaluation of automated knowledge discovery, which was infeasible in previous studies. We designed an interpretable, probability-based inference method to identify indirect causal relations and applied it to real-time COVID-19 drug repurposing from March 2020 to May 2023 (a generic illustration of indirect-relation scoring over a KG appears after this list). Our method identified around 1,200 candidate drugs in the first 4 months, and roughly one-third of the candidates identified in the first 2 months were later supported by clinical trials or PubMed publications. These outcomes are very difficult to attain through alternative approaches that lack a thorough understanding of the existing literature. A cloud-based platform (https://biokde.insilicom.com) was developed for academic users to access this rich structured data and the associated tools.
    Artificial intelligence and machine learning frameworks have become powerful tools for establishing computationally efficient mappings between inputs and outputs in engineering problems. These mappings have enabled optimization and analysis routines, leading to innovative designs, advanced material systems, and optimized manufacturing processes. In such modeling efforts, it is common to encounter multiple information (data) sources, each with its own specifications. Data fusion frameworks offer the capability to integrate these diverse sources into unified models, enhancing predictive accuracy and enabling knowledge transfer. However, challenges arise when these sources are heterogeneous, i.e., they do not share the same input parameter space. Such scenarios occur when domains that differ in fidelity, operating conditions, experimental setup, or scale require distinct parametrizations. To address this challenge, a two-stage heterogeneous multi-source data fusion framework based on input mapping calibration (IMC) and the latent variable Gaussian process (LVGP) is proposed (a generic, simplified sketch of this two-stage idea appears after this list). In the first stage, the IMC algorithm transforms the heterogeneous input parameter spaces into a unified reference parameter space. In the second stage, an LVGP-enabled multi-source data fusion model constructs a single, source-aware surrogate model on the unified reference space. The framework is demonstrated and analyzed through three engineering modeling case studies with distinct challenges: cantilever beams with varying design parametrizations, ellipsoidal voids with varying complexities and fidelities, and Ti6Al4V alloys with varying manufacturing modalities. The results demonstrate that the proposed framework achieves higher predictive accuracy than both independent single-source and source-unaware data fusion models.
    As large-scale scientific simulations and big data analyses become more popular, it is increasingly expensive to store huge amounts of raw simulation results for post-analysis. To minimize this expensive data I/O, “in-situ” analysis is a promising approach, in which data analysis applications analyze simulation-generated data on the fly without storing it first. However, it is challenging to organize, transform, and transport data at scale between two semantically different ecosystems because of their distinct software and hardware. To tackle these challenges, we design and implement the X-Composer framework. X-Composer connects cross-ecosystem applications to form an “in-situ” scientific workflow and provides a unified approach and recipe for supporting such hybrid in-situ workflows on distributed heterogeneous resources. X-Composer reorganizes simulation data as continuous data streams and feeds them seamlessly into Cloud-based stream-processing services to minimize I/O overheads. For evaluation, we use X-Composer to set up and execute a cross-ecosystem workflow consisting of a parallel Computational Fluid Dynamics simulation running on HPC and a distributed Dynamic Mode Decomposition analysis application running on Cloud. Our experimental results show that X-Composer can seamlessly couple HPC and Big Data jobs in their own native environments, achieve good scalability, and provide high-fidelity analytics for ongoing simulations in real time.
    Large scientific facilities are unique and complex infrastructures that have become fundamental instruments for enabling high-quality, world-leading research that tackles scientific problems at unprecedented scales. Cyberinfrastructure (CI) is an essential component of these facilities, providing the user community with access to data, data products, and services with the potential to transform data into knowledge. However, the timely evolution of the CI available at large facilities is challenging and can leave science communities' requirements not fully satisfied. Furthermore, integrating CI across multiple facilities as part of a scientific workflow is hard, resulting in data silos. In this paper, we explore how science gateways can provide improved user experiences and services that may not be offered at large facility datacenters. Using a science gateway supported by the Science Gateway Community Institute, which provides subscription-based delivery of streamed data and data products from the NSF Ocean Observatories Initiative (OOI), we propose a system that enables streaming-based capabilities and workflows using data from large facilities, such as the OOI, in a scalable manner. We leverage data infrastructure building blocks, such as the Virtual Data Collaboratory, which provides data and computing capabilities in the continuum, to efficiently and collaboratively integrate multiple data-centric CIs, build data-driven workflows, and connect large-facility data sources with NSF-funded CI, such as XSEDE. We also introduce architectural solutions for running these workflows using dynamically provisioned federated CI.
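As a forward-referenced illustration for related record 2 above: the sketch below is a generic, highly simplified stand-in for indirect-relation inference over a knowledge graph, not the probabilistic method used to build iKraph. All entities, relations, and edge probabilities are made up, and the scoring rule (aggregating two-hop path evidence) is an assumption chosen only for illustration.

```python
# Generic illustration of scoring indirect (two-hop) relations in a
# biomedical-style knowledge graph; NOT the iKraph inference method.
# All entities, relations, and probabilities below are synthetic.
import networkx as nx

kg = nx.DiGraph()
# Each edge carries an (assumed) probability that the stated relation is real.
kg.add_edge("drug:D1", "gene:G1", relation="inhibits", prob=0.9)
kg.add_edge("gene:G1", "disease:COVID-19", relation="promotes", prob=0.7)
kg.add_edge("drug:D2", "gene:G2", relation="inhibits", prob=0.6)
kg.add_edge("gene:G2", "disease:COVID-19", relation="promotes", prob=0.5)

def indirect_score(graph, source, target):
    """Aggregate evidence over all two-hop paths: 1 - prod(1 - path_prob)."""
    no_support = 1.0
    for mid in graph.successors(source):
        if graph.has_edge(mid, target):
            path_prob = graph[source][mid]["prob"] * graph[mid][target]["prob"]
            no_support *= 1.0 - path_prob
    return 1.0 - no_support

# Rank candidate drugs by the strength of their indirect link to the disease.
for drug in ("drug:D1", "drug:D2"):
    print(drug, round(indirect_score(kg, drug, "disease:COVID-19"), 3))
```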
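And for related record 3 above: the sketch below is a generic, simplified stand-in for the described two-stage fusion idea, not the authors' IMC or LVGP methods. It uses a fixed, hand-written mapping into a shared reference space and an ordinary Gaussian process with a one-hot source indicator; all data, dimensions, and parameter values are synthetic assumptions.

```python
# Generic illustration of two-stage heterogeneous data fusion (NOT the IMC/LVGP
# framework from the cited work): (1) map each source's inputs into a shared
# reference space, (2) fit one surrogate that is aware of the data source.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Source A: 1-D design parameter; Source B: 2-D parametrization of the same system.
x_a = rng.uniform(0, 1, size=(20, 1))
x_b = rng.uniform(0, 1, size=(15, 2))
y_a = np.sin(2 * np.pi * x_a[:, 0]) + 0.05 * rng.standard_normal(20)
y_b = np.sin(2 * np.pi * x_b[:, 0]) + 0.3 * x_b[:, 1] + 0.1 * rng.standard_normal(15)

# Stage 1 (stand-in for input mapping calibration): place both sources in a
# common 2-D reference space; here source A simply gets a fixed second coordinate.
z_a = np.hstack([x_a, np.zeros((x_a.shape[0], 1))])
z_b = x_b

# Stage 2 (stand-in for LVGP): append a one-hot source indicator so a single
# surrogate can learn source-specific behavior.
def with_source(z, source_idx, n_sources=2):
    onehot = np.zeros((z.shape[0], n_sources))
    onehot[:, source_idx] = 1.0
    return np.hstack([z, onehot])

X = np.vstack([with_source(z_a, 0), with_source(z_b, 1)])
y = np.concatenate([y_a, y_b])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X, y)

# Query the fused surrogate at a new reference-space point, as seen by source B.
query = with_source(np.array([[0.25, 0.5]]), source_idx=1)
mean, std = gp.predict(query, return_std=True)
print(f"prediction {mean[0]:.3f} +/- {std[0]:.3f}")
```

The point of the sketch is only that, once both sources live in one reference space and carry a source label, a single surrogate can borrow strength across sources while still distinguishing them.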