This content will become publicly available on April 1, 2026

Title: Heterogeneous Multi-Source Data Fusion Through Input Mapping and Latent Variable Gaussian Process
Abstract Artificial intelligence and machine learning frameworks have become powerful tools for establishing computationally efficient mappings between inputs and outputs in engineering problems. These mappings have enabled optimization and analysis routines, leading to innovative designs, advanced material systems, and optimized manufacturing processes. In such modeling efforts, it is common to encounter multiple information (data) sources, each varying in its specifications. Data fusion frameworks offer the capability to integrate these diverse sources into unified models, enhancing predictive accuracy and enabling knowledge transfer. However, challenges arise when these sources are heterogeneous, i.e., they do not share the same input parameter space. Such scenarios occur when domains that differ in complexity, such as fidelity, operating conditions, experimental setup, and scale, require distinct parametrizations. To address this challenge, a two-stage heterogeneous multi-source data fusion framework based on input mapping calibration (IMC) and the latent variable Gaussian process (LVGP) is proposed. In the first stage, the IMC algorithm transforms the heterogeneous input parameter spaces into a unified reference parameter space. In the second stage, an LVGP-enabled multi-source data fusion model constructs a single, source-aware surrogate model on the unified reference space. The framework is demonstrated and analyzed through three engineering modeling case studies with distinct challenges: cantilever beams with varying design parametrizations, ellipsoidal voids with varying complexities and fidelities, and Ti6Al4V alloys with varying manufacturing modalities. The results demonstrate that the proposed framework achieves higher predictive accuracy than both independent single-source and source-unaware data fusion models.
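As a minimal sketch of the two-stage idea described above, the toy below substitutes a fixed per-source linear transform for the IMC stage and a plain RBF Gaussian process with a one-hot source indicator for the LVGP stage. All functions, transforms, and data here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def rbf_kernel(A, B, ls=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def fit_gp(X, y, noise=1e-4):
    """Return a mean-predictor for a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return lambda Xs: rbf_kernel(Xs, X) @ alpha

# Stage 1 (stand-in for IMC): map each source's native inputs onto a
# shared reference space with a per-source linear transform.
def to_reference(x_native, A, b):
    return x_native @ A + b

# Stage 2 (stand-in for LVGP): append a per-source indicator so the
# surrogate on the unified space remains source-aware.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, (20, 1))             # source 1: native 1-D input
x2 = rng.uniform(0, 1, (20, 2))             # source 2: native 2-D input
X1 = to_reference(x1, np.array([[1.0, 0.0]]), np.zeros(2))
X2 = to_reference(x2, np.eye(2), np.zeros(2))
y1 = np.sin(3 * X1[:, 0])                   # source 1 response
y2 = np.sin(3 * X2[:, 0]) + 0.3 * X2[:, 1]  # source 2 adds a second effect

Z = np.vstack([np.c_[X1, np.zeros(len(X1))],   # source code 0
               np.c_[X2, np.ones(len(X2))]])   # source code 1
predict = fit_gp(Z, np.concatenate([y1, y2]))

x_test = np.array([[0.5, 0.2, 1.0]])        # query source 2 at (0.5, 0.2)
print(float(predict(x_test)))
```

The one-hot column is a crude stand-in for the LVGP's learned latent source embedding: it lets the single surrogate distinguish sources while still borrowing strength across them through the shared reference coordinates.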
Award ID(s):
2219489
PAR ID:
10630111
Publisher / Repository:
ASME Journal of Mechanical Design
Date Published:
Journal Name:
Journal of Mechanical Design
Volume:
147
Issue:
4
ISSN:
1050-0472
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Managing and preparing complex data for deep learning, a prevalent approach in large-scale data science, can be challenging. Data transfer for model training also presents difficulties, impacting scientific fields like genomics, climate modeling, and astronomy. A large-scale solution like Google Pathways, with a distributed execution environment for deep learning models, exists but is proprietary. Integrating existing open-source, scalable runtime tools and data frameworks on high-performance computing (HPC) platforms is crucial to addressing these challenges. Our objective is to establish a smooth and unified method of combining data engineering and deep learning frameworks with diverse execution capabilities that can be deployed on various high-performance computing platforms, including clouds and supercomputers. We aim to support heterogeneous systems with accelerators, where Cylon and other data engineering and deep learning frameworks can utilize heterogeneous execution. To achieve this, we propose Radical-Cylon, a heterogeneous runtime system with a parallel and distributed data framework that executes Cylon as a task of Radical Pilot. We thoroughly explain Radical-Cylon's design and development and the execution process of Cylon tasks using Radical Pilot. This approach enables the use of heterogeneous MPI communicators across multiple nodes. Radical-Cylon achieves better performance than Bare-Metal Cylon with minimal and constant overhead, attaining 4-15% faster execution than batch execution while performing the same join and sort operations on 35 million and 3.5 billion rows with the same resources. The approach aims to excel on both scientific and engineering research HPC systems while demonstrating robust performance on cloud infrastructures. This dual capability fosters collaboration and innovation within the open-source scientific research community.
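The execution model this abstract describes, partitioned data operations such as join and sort submitted as independent tasks to a pooled runtime, can be sketched with Python's standard `concurrent.futures` as a stand-in. This does not reproduce the Radical Pilot or Cylon APIs; the task functions and partitioning are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

def sort_task(chunk):
    """One data-engineering task: sort a partition of rows."""
    return sorted(chunk)

def join_task(left, right):
    """Hash-join two (key, value) partitions on the key."""
    index = {k: v for k, v in left}
    return [(k, index[k], v) for k, v in right if k in index]

# Partition the data, submit each partition as an independent task, and
# gather the results -- the pattern a pilot-style runtime scales out
# across many (possibly heterogeneous) nodes.
rows = [5, 3, 8, 1, 9, 2, 7, 4]
chunks = [rows[i::2] for i in range(2)]
with ThreadPoolExecutor(max_workers=2) as pool:
    sorted_chunks = list(pool.map(sort_task, chunks))

left = [(1, "a"), (2, "b")]
right = [(2, "x"), (3, "y")]
print(sorted_chunks, join_task(left, right))
```

In the actual system, the pilot abstraction replaces the thread pool, letting the same task graph run over MPI communicators spanning multiple nodes rather than local workers.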
  2.
    Abstract The Global Precipitation Measurement (GPM) constellation of spaceborne sensors provides a variety of direct and indirect measurements of precipitation processes. Such observations can be employed to derive spatially and temporally consistent gridded precipitation estimates either via data-driven retrieval algorithms or by assimilation into physically based numerical weather models. We compare the data-driven Integrated Multisatellite Retrievals for GPM (IMERG) and the assimilation-enabled NASA-Unified Weather Research and Forecasting (NU-WRF) model against Stage IV reference precipitation for four major extreme rainfall events in the southeastern United States using an object-based analysis framework that decomposes gridded precipitation fields into storm objects. As an alternative to conventional “grid-by-grid analysis,” the object-based approach provides a promising way to diagnose spatial properties of storms, trace them through space and time, and connect their accuracy to storm types and input data sources. The evolution of two tropical cyclones is generally captured by IMERG and NU-WRF, while the less organized spatial patterns of two mesoscale convective systems pose challenges for both. NU-WRF rain rates are generally more accurate, while IMERG better captures storm location and shape. Both show higher skill in detecting large, intense storms compared to smaller, weaker storms. IMERG’s accuracy depends on the input microwave and infrared data sources; NU-WRF does not appear to exhibit this dependence. Findings highlight that an object-oriented view can provide deeper insights into satellite precipitation performance and that the satellite precipitation community should further explore the potential for “hybrid” data-driven and physics-driven estimates in order to make optimal usage of satellite observations.
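The decomposition step this abstract relies on, turning a gridded precipitation field into discrete storm objects, can be illustrated with simple 4-connected component labeling over a rain-rate threshold. The threshold value and grid are invented for the example; the actual framework's storm-identification criteria are not specified in the abstract:

```python
from collections import deque

def storm_objects(grid, thresh):
    """Label 4-connected components of cells exceeding `thresh`.

    Returns a list of objects, each a list of (row, col) cells -- the
    basic step of an object-based precipitation analysis.
    """
    nrow, ncol = len(grid), len(grid[0])
    seen = [[False] * ncol for _ in range(nrow)]
    objects = []
    for r in range(nrow):
        for c in range(ncol):
            if grid[r][c] > thresh and not seen[r][c]:
                cells, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:                      # breadth-first flood fill
                    i, j = queue.popleft()
                    cells.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < nrow and 0 <= nj < ncol
                                and grid[ni][nj] > thresh and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                objects.append(cells)
    return objects

rain = [[0, 6, 7, 0],
        [0, 5, 0, 0],
        [0, 0, 0, 9]]
print([len(o) for o in storm_objects(rain, 1.0)])  # two objects: sizes 3 and 1
```

Once fields are reduced to objects like these, per-object properties (area, centroid, intensity) can be compared between products and tracked through time, which is what makes the object view richer than grid-by-grid scores.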
  3.
    Abstract Powder bed fusion (PBF) additive manufacturing has enabled unmatched agile manufacturing of a wide range of products from engine components to medical implants. While high-fidelity finite element modeling and feedback control have been identified as key for predicting and engineering part qualities in PBF, existing results in each realm are developed in opposite computational architectures wildly different in time scale. Integrating both realms, this paper builds a first-instance closed-loop simulation framework by utilizing the output signals retrieved from the finite element model (FEM) to directly update the control signals sent to the model. The proposed closed-loop simulation enables testing the limits of advanced controls in PBF and surveying the parameter space fully to generate more predictable part qualities. Along the course of formulating the framework, we verify the FEM by comparing its results with experimental and analytical solutions and then use the FEM to understand the melt-pool evolution induced by the in-layer thermomechanical interactions. From there, we build a repetitive control algorithm to greatly attenuate variations of the melt pool width.
  4. Customized accelerators have revolutionized modern computing by delivering substantial gains in energy efficiency and performance through hardware specialization. Field-Programmable Gate Arrays (FPGAs) play a crucial role in this paradigm, offering unparalleled flexibility and high-performance potential. High-Level Synthesis (HLS) and source-to-source compilers have simplified FPGA development by translating high-level programming languages into hardware descriptions enriched with directives. However, achieving high Quality of Results (QoR) remains a significant challenge, requiring intricate code transformations, strategic directive placement, and optimized data communication. This article presents Prometheus, a holistic optimization framework that integrates key optimizations, including task fusion, tiling, loop permutation, computation-communication overlap, and concurrent task execution, into a unified design space. By leveraging Non-Linear Programming (NLP) methodologies, Prometheus explores the optimization space under strict resource constraints, enabling automatic bitstream generation. Unlike existing frameworks, Prometheus considers interdependent transformations and dynamically balances computation and memory access. We evaluate Prometheus across multiple benchmarks, demonstrating its ability to maximize parallelism, minimize execution stalls, and optimize data movement. The results showcase its superior performance compared to state-of-the-art FPGA optimization frameworks, highlighting its effectiveness in delivering high QoR while reducing manual tuning efforts.
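The kind of constrained design-space exploration this abstract describes can be sketched in miniature. The toy below exhaustively enumerates hypothetical tile-size and unroll-factor choices under invented DSP/BRAM budgets and a made-up throughput model; Prometheus instead solves such coupled choices jointly with NLP rather than enumeration:

```python
from itertools import product

# Hypothetical resource/performance model: each (tile, unroll) choice
# costs DSPs and BRAM and yields a throughput estimate.  All numbers
# here are invented for illustration.
TILES = [8, 16, 32, 64]
UNROLLS = [1, 2, 4, 8]
DSP_BUDGET, BRAM_BUDGET = 120, 64

def cost(tile, unroll):
    """Resource usage of a candidate design point."""
    return {"dsp": 4 * unroll * (tile // 8), "bram": tile // 2}

def throughput(tile, unroll):
    """Toy objective: larger tiles and wider unrolls run faster."""
    return tile * unroll

# Exhaustive search: keep the feasible point with the best objective.
best, best_tp = None, -1
for tile, unroll in product(TILES, UNROLLS):
    c = cost(tile, unroll)
    if c["dsp"] <= DSP_BUDGET and c["bram"] <= BRAM_BUDGET:
        tp = throughput(tile, unroll)
        if tp > best_tp:
            best, best_tp = (tile, unroll), tp
print(best, best_tp)
```

Enumeration like this blows up combinatorially as transformations interact, which is precisely why the framework formulates the search as a mathematical program instead.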
  5.
    Abstract A high-precision additive manufacturing (AM) process, powder bed fusion (PBF) has enabled unmatched agile manufacturing of a wide range of products from engine components to medical implants. While finite element modeling and closed-loop control have been identified as key for predicting and engineering part qualities in PBF, existing results in each realm are developed in opposite computational architectures wildly different in time scale. This paper builds a first-instance closed-loop simulation framework by integrating high-fidelity finite element modeling with feedback controls originally developed for general mechatronics systems. By utilizing the output signals (e.g., melt pool width) retrieved from the finite element model (FEM) to update directly the control signals (e.g., laser power) sent to the model, the proposed closed-loop framework enables testing the limits of advanced controls in PBF and surveying the parameter space fully to generate more predictable part qualities. Along the course of formulating the framework, we verify the FEM by comparing its results with experimental and analytical solutions and then use the FEM to understand the melt-pool evolution induced by the in- and cross-layer thermomechanical interactions. From there, we build a repetitive control (RC) algorithm to attenuate variations of the melt pool width.
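The repetitive control idea in this abstract (and item 3 above) exploits the fact that the disturbance repeats every layer: the previous layer's tracking error corrects the next layer's control profile. The scalar toy below applies the textbook first-order RC update to a stand-in plant, not the paper's FEM; the target, disturbance, and gain values are invented:

```python
# Toy layer-to-layer repetitive control: the melt-pool "width" at each
# in-layer sample responds to laser power plus a disturbance that
# repeats every layer.  RC updates the power profile once per layer
# using the previous layer's tracking error.
target = 0.5                              # desired width (stand-in units)
disturbance = [0.08, -0.05, 0.10, -0.02]  # layer-periodic variation
gain = 0.6                                # RC learning gain (0 < gain < 2 here)

u = [0.0] * len(disturbance)              # control profile over one layer
errors = []
for layer in range(12):
    y = [u_i + d_i for u_i, d_i in zip(u, disturbance)]   # plant: y = u + d
    e = [target - y_i for y_i in y]
    errors.append(max(abs(e_i) for e_i in e))
    u = [u_i + gain * e_i for u_i, e_i in zip(u, e)]      # RC update

print(errors[0], errors[-1])
```

For this unit-gain plant the per-layer error contracts by a factor of (1 - gain) each layer, so the layer-periodic variation is driven toward zero; the paper's contribution is demonstrating such attenuation against the full thermomechanical FEM rather than a scalar model.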