Search for: All records

Award ID contains: 2528805

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Knowledge graph (KG) learning offers a powerful framework for generating new knowledge and making inferences. Training KG embeddings can take a long time, especially for larger datasets. Our analysis shows that gradient computation over the embeddings is one of the dominant functions in the translation-based KG embedding training loop. We address this issue by replacing the core embedding computation with SpMM (sparse-dense matrix multiplication) kernels, which lets us unify multiple scatter (and gather) operations into a single operation, reducing both training time and memory usage. We create a general framework for training KG models using sparse kernels and implement four models: TransE, TransR, TransH, and TorusE. Our sparse implementations achieve up to 5.3x speedup on the CPU and up to 4.2x speedup on the GPU with a significantly lower GPU memory footprint, and the speedups are consistent across large and small datasets for a given model. The proposed sparse approach can also be extended to accelerate other translation-based models (such as TransC and TransM) and non-translational models (such as DistMult, ComplEx, and RotatE). A sketch of the SpMM formulation appears after this list.
    Free, publicly-accessible full text available May 11, 2026
  2. We develop a comprehensive framework for storing, analyzing, forecasting, and visualizing industrial energy systems consisting of multiple devices and sensors. Our framework models complex energy systems as a dynamic knowledge graph, uses a novel machine learning (ML) model for energy forecasting, and visualizes continuous predictions through an interactive dashboard. At the core of this framework is A-RNN, a simple yet efficient model that uses dynamic attention mechanisms for automated feature selection. We validate the model using datasets from two manufacturers and one university testbed containing hundreds of sensors. Our results show that A-RNN forecasts energy usage within 5% of observed values; these predictions are as much as 50% more accurate than those produced by standard RNN models that rely on individual features and devices. Additionally, A-RNN identifies the key features that affect forecasting accuracy, making the model's forecasts interpretable. Our analytics platform is computationally and memory efficient, making it suitable for deployment on edge devices and in manufacturing plants. A sketch of the attention-based feature weighting appears after this list.
    Free, publicly-accessible full text available May 1, 2026
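
The SpMM idea in the first abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a TransE-style model, where the per-triple gradient g = dL/d(e_h + w_r - e_t) flows to the head-entity row as +g and to the tail-entity row as -g, so all per-triple scatter-adds collapse into a single SpMM against a sparse incidence matrix. The function name and layout below are hypothetical.

```python
# Minimal sketch: replacing per-triple scatter-add with one SpMM
# for accumulating entity-embedding gradients (TransE-style assumption).
import torch

def spmm_embedding_grad(heads, tails, per_triple_grad, num_entities):
    """Accumulate entity-embedding gradients with a single SpMM.

    heads, tails    : LongTensor [num_triples] - entity indices
    per_triple_grad : FloatTensor [num_triples, dim] - dL/d(e_h + w_r - e_t)
    returns         : FloatTensor [num_entities, dim] - dL/dE
    """
    nt = heads.shape[0]
    rows = torch.cat([heads, tails])                     # entity row per nonzero
    cols = torch.cat([torch.arange(nt), torch.arange(nt)])
    vals = torch.cat([torch.ones(nt), -torch.ones(nt)])  # +1 for head, -1 for tail
    # Sparse incidence matrix M^T with shape [num_entities, num_triples]
    mt = torch.sparse_coo_tensor(torch.stack([rows, cols]), vals,
                                 (num_entities, nt))
    # One SpMM replaces num_triples individual scatter-add operations.
    return torch.sparse.mm(mt, per_triple_grad)

# Example: 3 triples over 5 entities, 4-dimensional embeddings
h = torch.tensor([0, 1, 2])
t = torch.tensor([3, 4, 0])
g = torch.randn(3, 4)
grad_E = spmm_embedding_grad(h, t, g, num_entities=5)
```

Batching the scatter/gather traffic into one sparse kernel is what enables the reported speedups: the SpMM touches each entity row once per batch instead of once per triple.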
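Similarly, a minimal sketch of the second abstract's attention-based feature selection, assuming a GRU backbone with softmax attention over the input features. The hidden size, single-step forecasting head, and class name are illustrative assumptions, not the published A-RNN architecture.

```python
# Minimal sketch: dynamic attention over sensor features feeding an RNN
# forecaster (A-RNN-like design; details are assumptions).
import torch
import torch.nn as nn

class AttentionRNN(nn.Module):
    def __init__(self, num_features, hidden_size=64):
        super().__init__()
        self.attn = nn.Linear(num_features, num_features)  # per-feature scores
        self.rnn = nn.GRU(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)               # next-step energy

    def forward(self, x):                        # x: [batch, time, num_features]
        w = torch.softmax(self.attn(x), dim=-1)  # dynamic feature weights
        out, _ = self.rnn(x * w)                 # RNN over reweighted inputs
        return self.head(out[:, -1]), w          # forecast + weights

# Example: 4 sequences, 24 time steps, 8 sensor features
model = AttentionRNN(num_features=8)
x = torch.randn(4, 24, 8)
forecast, weights = model(x)
```

Returning the attention weights alongside the forecast is what provides the interpretability the abstract describes: large weights flag the sensors that drive the prediction.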