

Search for: All records

Award ID contains: 2134689

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Federated learning (FL) has emerged as a new paradigm of machine learning (ML) with the goal of collaborative learning on the vast pool of private data available across distributed edge devices. The focus of most existing works in FL systems has been on addressing the challenges of computation and communication heterogeneity inherent in training with edge devices. However, the crucial impact of I/O and the role of limited on-device storage have not been fully explored in the FL context. Without policies that exploit on-device storage for the placement of client data samples and schedule clients based on I/O benefits, FL training can suffer inefficiencies such as increased training time and degraded accuracy convergence. In this paper, we propose FedCaSe, a framework for efficiently caching client samples in-situ on limited on-device storage and scheduling client participation. FedCaSe boosts I/O performance by exploiting a unique characteristic---the experience, i.e., relative impact on overall performance, of data samples and clients. FedCaSe utilizes this information in adaptive caching policies for sample placement inside the limited memory of edge clients. The framework also exploits the experience information to orchestrate the future selection of clients. Our experiments with representative workloads and policies show that, compared to the state of the art, FedCaSe improves the training time to accuracy convergence by 2.06x at the scale of thousands of clients.
    Free, publicly-accessible full text available November 20, 2025
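The experience-driven caching idea above can be sketched in a few lines. This is a toy illustration under assumed semantics (a per-sample "experience" score, a fixed-capacity cache that retains the highest-scoring samples), not FedCaSe's actual policy; the class and method names are hypothetical.

```python
import heapq

class ExperienceCache:
    """Toy cache that keeps the top-k samples ranked by an 'experience' score
    (a stand-in for a sample's relative impact on training)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap of (score, sample_id); root = lowest score

    def admit(self, sample_id, score):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (score, sample_id))
        elif score > self.heap[0][0]:
            # Evict the lowest-experience resident to admit a higher one.
            heapq.heapreplace(self.heap, (score, sample_id))

    def cached(self):
        return {sid for _, sid in self.heap}

cache = ExperienceCache(capacity=3)
for sid, score in [("a", 0.9), ("b", 0.1), ("c", 0.5), ("d", 0.7), ("e", 0.2)]:
    cache.admit(sid, score)
print(sorted(cache.cached()))  # ['a', 'c', 'd'] -- the three highest scores
```

The same scores could drive client selection: clients whose cached samples carry higher aggregate experience are scheduled more often, which is the intuition the abstract describes.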
  2. Understanding the fatigue behavior and accurately predicting the fatigue life of laser powder bed fusion (L-PBF) parts remain a pressing challenge due to complex failure mechanisms, time-consuming tests, and limited fatigue data. This study proposes a physics-informed, data-driven multimodal transfer learning (MMTL) framework to understand process-defect-fatigue relationships in L-PBF by integrating various modalities of fatigue performance, including process parameters, XCT-inspected defects, and fatigue test conditions. It aims to leverage a model pre-trained on abundant process and defect data in the source task to predict fatigue life nondestructively with limited fatigue test data in the target task. MMTL employs a hierarchical graph convolutional network (HGCN) to classify defects in the source task by representing process parameters and defect features in graphs, thereby enhancing its interpretability. The feature embedding learned from HGCN is then transferred to fatigue life modeling in neural network layers, enabling fatigue life prediction for L-PBF parts with limited data. MMTL validation through a numerical simulation and a real-case study demonstrates its effectiveness, achieving an F1-score of 0.9593 in defect classification and a mean absolute percentage log error of 0.0425 in fatigue life prediction. MMTL can be extended to other applications with multiple modalities and limited data.
    Free, publicly-accessible full text available October 9, 2025
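The transfer step described above can be sketched minimally: freeze a feature extractor learned on the data-rich source task and fit only a small head on the data-poor target task. This is a schematic sketch with made-up shapes and a random projection standing in for the pre-trained HGCN embedding; it is not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an embedding learned on the (data-rich) source task,
# e.g. defect classification; here just a fixed random projection.
W_pretrained = rng.normal(size=(6, 3))

def embed(x):
    return np.tanh(x @ W_pretrained)  # frozen feature extractor

# Target task: only a handful of labeled fatigue samples (8 here).
X_target = rng.normal(size=(8, 6))
y_target = rng.normal(size=8)

# Fit only a small linear head on top of the frozen embedding
# (ordinary least squares), instead of training everything from scratch.
Z = embed(X_target)
head, *_ = np.linalg.lstsq(Z, y_target, rcond=None)

pred = embed(X_target) @ head
print(pred.shape)  # (8,)
```

The point of the sketch: because only the 3-parameter head is fit on the target task, far fewer fatigue tests are needed than for end-to-end training.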
  3. As the number of pre-trained machine learning (ML) models is growing exponentially, data reduction tools are not catching up. Existing data reduction techniques are not specifically designed for pre-trained model (PTM) dataset files. This is largely due to a lack of understanding of the patterns and characteristics of these datasets, especially those relevant to data reduction and compressibility.

    This paper presents the first exhaustive analysis to date of the storage compressibility of PTM datasets. Our analysis spans different types of data reduction and compression techniques, from hash-based data deduplication and data similarity detection to dictionary-coding compression. It explores these techniques at three levels of data granularity: model layers, model chunks, and model parameters. We draw new observations indicating that modern data reduction tools are not effective when handling PTM datasets. There is a pressing need for new compression methods that take PTMs' data characteristics into account for effective storage reduction.

    Motivated by our findings, we design Elf, a simple yet effective, error-bounded, lossy floating-point compression method. Elf transforms floating-point parameters in such a way that the common exponent field of the transformed parameters can be completely eliminated to save storage space. We develop Elves, a compression framework that integrates Elf along with several other data reduction methods. Elves uses the most effective method to compress PTMs that exhibit different patterns. Evaluation shows that Elves achieves an overall compression ratio of 1.52×, which is 1.31×, 1.32×, and 1.29× higher than a general-purpose compressor (zstd), an error-bounded lossy compressor (SZ3), and uniform model quantization, respectively, with negligible model accuracy loss.

    Free, publicly-accessible full text available April 1, 2025
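The "common exponent field" observation can be made concrete with IEEE-754 bit layouts. The toy below is only an illustration of the underlying property Elf exploits (values confined to a narrow range share an identical exponent field, so those bits carry no per-value information); it is not Elf's actual transform.

```python
import struct

def f32_bits(x):
    """Raw IEEE-754 bit pattern of x as a 32-bit float."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def exponent_field(x):
    """The 8 exponent bits of a float32 (bits 23..30)."""
    return (f32_bits(x) >> 23) & 0xFF

# Any float in [1.0, 2.0) has the same biased exponent (127), so if
# parameters are transformed into such a range, the 8-bit exponent
# field is identical for every value and need not be stored per value.
values = [1.0, 1.25, 1.5, 1.999]
print({exponent_field(v) for v in values})  # {127}
```

Eliminating a shared 8-bit field from every 32-bit parameter already bounds the savings at 25% before mantissa truncation; an error-bounded scheme like Elf can then additionally drop low-order mantissa bits within the error budget.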
  4. Serverless computing enables a new way of building and scaling cloud applications by allowing developers to write fine-grained serverless or cloud functions. The execution duration of a cloud function is typically short---ranging from a few milliseconds to hundreds of seconds. However, due to resource contentions caused by public clouds' deep consolidation, the function execution duration may get significantly prolonged and fail to accurately account for the function's true resource usage. We observe that the function duration can be highly unpredictable with huge amplification of more than 50× for an open-source FaaS platform (OpenLambda). Our experiments show that the OS scheduling policy of cloud functions' host server can have a crucial impact on performance. The default Linux scheduler, CFS (Completely Fair Scheduler), being oblivious to workloads, frequently context-switches short functions, causing a turnaround time that is much longer than their service time. We propose SFS (Smart Function Scheduler), which works entirely in the user space and carefully orchestrates existing Linux FIFO and CFS schedulers to approximate Shortest Remaining Time First (SRTF). SFS uses two-level scheduling that seamlessly combines a new FILTER policy with Linux CFS, to trade off increased duration of long functions for significant performance improvement for short functions. We implement SFS in the Linux user space and port it to OpenLambda. Evaluation results show that SFS significantly improves short functions' duration with a small impact on relatively longer functions, compared to CFS. 
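The scheduling trade-off SFS targets can be shown with a toy simulation: under a fair, quantum-based scheduler (CFS-like round-robin), short jobs are repeatedly context-switched and their turnaround far exceeds their service time, while shortest-remaining-time-first (SRTF) lets them finish almost immediately. This simulation is an illustration of the scheduling principle, not SFS's user-space implementation.

```python
def turnaround_srtf(bursts):
    """With all jobs arriving at t=0, SRTF reduces to shortest-job-first."""
    t, out = 0, {}
    for j in sorted(range(len(bursts)), key=lambda i: bursts[i]):
        t += bursts[j]
        out[j] = t  # completion time == turnaround time (arrival at 0)
    return out

def turnaround_rr(bursts, quantum=1):
    """Round-robin with a fixed quantum, all jobs arriving at t=0."""
    remaining = list(bursts)
    t, out = 0, {}
    pending = list(range(len(bursts)))
    while pending:
        nxt = []
        for j in pending:
            run = min(quantum, remaining[j])
            t += run
            remaining[j] -= run
            if remaining[j] == 0:
                out[j] = t
            else:
                nxt.append(j)
        pending = nxt
    return out

bursts = [5, 5, 5, 100]          # three short functions, one long one
rr, srtf = turnaround_rr(bursts), turnaround_srtf(bursts)
print(rr[0], srtf[0])            # 17 vs 5: RR more than triples short-job turnaround
print(rr[3], srtf[3])            # 115 vs 115: the long job is barely penalized
```

This is the trade-off the abstract describes: approximating SRTF buys large improvements for short functions at a small cost to long ones.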
  5. Laser beam powder bed fusion (LB-PBF) is a widely used metal additive manufacturing process due to its high potential for fabrication flexibility and quality. Its process and performance optimization are key to improving product quality and promoting further adoption of LB-PBF. In this article, the state-of-the-art machine learning (ML) applications for process and performance optimization in LB-PBF are reviewed. In these applications, ML is used to model the process-structure-property relationships in a data-driven way and optimize process parameters for high-quality fabrication. We review these applications in terms of the relationships they model with ML (e.g., process-structure, process-property, or structure-property) and categorize the ML algorithms into interpretable ML, conventional ML, and deep ML according to interpretability and accuracy. This categorization may be particularly useful for practitioners as a comprehensive reference for selecting ML algorithms according to their particular needs. Of the three types of ML above, conventional ML has been applied to process and performance optimization the most, due to its balanced performance in terms of model accuracy and interpretability. To explore the power of ML in discovering new knowledge and insights, interpretation with additional steps, such as model-agnostic methods or sensitivity analysis, is often needed for complex models arising from conventional ML and deep ML. In the future, enhancing the interpretability of ML, standardizing a systematic procedure for ML, and developing a collaborative platform to share data and findings will be critical to promoting the integration of ML in LB-PBF applications on a large scale.
  6. Three typical types of defects, i.e., keyholes, lack of fusion (LoF), and gas-entrapped pores (GEP), characterized by various features (e.g., volume, surface area, etc.), are generated under different process parameters of laser beam powder bed fusion (L-PBF) processes in additive manufacturing (AM). The different types of defects deteriorate the mechanical performance of L-PBF components, such as fatigue life, to a different extent. However, there is a lack of recognized approaches to classify the defects automatically and accurately in L-PBF components. This work presents a novel hierarchical graph convolutional network (H-GCN) to classify different types of defects by a cascading GCN structure with a low-level feature (e.g., defect features) layer and a high-level feature (e.g., process parameters) layer. Such an H-GCN not only leverages the multi-level information from process parameters and defect features to classify the defects but also explores the impact of process parameters on defect types and features. The H-GCN is evaluated through a case study with X-ray computed tomography (CT) L-PBF defect datasets and compared with several machine learning methods. H-GCN exhibits an outstanding classification performance with an F1-score of 1.000 and reveals the potential effect of process parameters on three types of defects.
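The cascading two-level structure can be sketched schematically: a low-level layer aggregates per-defect features into an embedding, and a high-level layer fuses that embedding with process parameters to score the three defect classes. All shapes and weights below are made up (random matrices stand in for learned GCN weights); this illustrates the information flow, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: 4 defects per part with 3 geometric features each,
# plus 2 process parameters per part (e.g., laser power, scan speed).
defect_feats = rng.normal(size=(4, 3))   # low-level layer input
process_params = rng.normal(size=(2,))   # high-level layer input

# Low-level "graph convolution" reduced to mean aggregation over the
# defect neighborhood, followed by a projection (random stand-in weights).
W_low = rng.normal(size=(3, 5))
h_low = np.tanh(defect_feats.mean(axis=0) @ W_low)

# High-level layer: fuse the aggregated defect embedding with the process
# parameters, then score the 3 classes (keyhole / LoF / GEP) via softmax.
W_high = rng.normal(size=(5 + 2, 3))
logits = np.concatenate([h_low, process_params]) @ W_high
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.shape)  # (3,) -- one probability per defect class
```

Because the process parameters enter only at the high level, inspecting the high-level weights is one way such a cascade can expose how process settings influence defect type, which is the interpretability angle the abstract highlights.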
  7. Federated learning (FL) involves training a model over massive distributed devices, while keeping the training data localized and private. This form of collaborative learning exposes new tradeoffs among model convergence speed, model accuracy, balance across clients, and communication cost, with new challenges including: (1) straggler problem—where clients lag due to data or (computing and network) resource heterogeneity, and (2) communication bottleneck—where a large number of clients communicate their local updates to a central server and bottleneck the server. Many existing FL methods focus on optimizing along only one single dimension of the tradeoff space. Existing solutions use asynchronous model updating or tiering-based, synchronous mechanisms to tackle the straggler problem. However, asynchronous methods can easily create a communication bottleneck, while tiering may introduce biases that favor faster tiers with shorter response latencies. To address these issues, we present FedAT, a novel Federated learning system with Asynchronous Tiers under Non-i.i.d. training data. FedAT synergistically combines synchronous, intra-tier training and asynchronous, cross-tier training. By bridging the synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect with improved convergence speed and test accuracy. FedAT uses a straggler-aware, weighted aggregation heuristic to steer and balance the training across clients for further accuracy improvement. FedAT compresses uplink and downlink communications using an efficient, polyline-encoding-based compression algorithm, which minimizes the communication cost. Results show that FedAT improves the prediction performance by up to 21.09% and reduces the communication cost by up to 8.5×, compared to state-of-the-art FL methods. 
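The cross-tier aggregation idea can be sketched with a simple weighting rule: because fast tiers push updates far more often than slow (straggler) tiers, an unweighted average would let them dominate, so tiers that have updated less are weighted more. This is a simplified stand-in for FedAT's straggler-aware heuristic, with made-up numbers.

```python
import numpy as np

# Per-tier model snapshots (same parameter shape) and per-tier update
# counts: slower tiers have contributed fewer rounds so far.
tier_models = np.array([[1.0, 2.0],
                        [3.0, 4.0],
                        [5.0, 6.0]])
update_counts = np.array([10, 5, 1])   # tier 0 fastest, tier 2 slowest

# Straggler-aware weighting (a simplification of FedAT's heuristic):
# weight each tier inversely to how often it has updated, so frequent
# fast-tier updates do not bias the global model against stragglers.
w = 1.0 / update_counts
w = w / w.sum()                        # normalize: [1/13, 2/13, 10/13]

global_model = (tier_models * w[:, None]).sum(axis=0)
print(global_model)                    # approx [4.3846, 5.3846]
```

With uniform weights the global model would sit at [3, 4]; the inverse-frequency weights pull it toward the slow tier's model, compensating for its rare updates.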