This content will become publicly available on November 7, 2025

Title: Ontology-guided Data Sharing and Federated Quality Control with Differential Privacy in Additive Manufacturing
Abstract

The scarcity of measured data for defect identification often challenges the development and certification of additive manufacturing processes. Knowledge transfer and sharing have become emerging solutions to the small-data challenge in quality control, improving machine learning with limited data, but this strategy raises concerns about privacy protection. Existing zero-shot learning and federated learning methods are insufficient to represent, select, and mask the data to be shared, or to quantify and control the resulting privacy loss. This study integrates differential privacy from cybersecurity with federated learning to investigate strategies for sharing a manufacturing defect ontology. The method first proposes multilevel attributes in the defect ontology, masked by noise, as the shared data structure for characterizing manufacturing defects. Information leakage due to sharing ontology branches and data is estimated by epsilon-differential privacy (ε-DP). Under federated learning, the proposed method optimizes strategies for sharing defect ontology and image data to improve zero-shot defect classification within a given privacy budget. The proposed framework includes (1) a sharing strategy based on multilevel attributes in the defect ontology with controllable privacy leakage, (2) joint optimization of decisions in differential privacy, zero-shot defect classification, and federated learning, and (3) a two-stage algorithm that solves the joint optimization by combining stochastic gradient descent for the classification models with an evolutionary algorithm that explores data-sharing strategies. A case study on zero-shot learning of additive manufacturing defects demonstrated the effectiveness of the proposed method in ontology sharing, defect classification, and cloud information use.
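
The abstract's core mechanism is masking multilevel ontology attributes with noise whose leakage is quantified by ε-differential privacy and charged against a privacy budget. A minimal sketch of that idea follows, using the standard Laplace mechanism with sequential composition across ontology levels; the attribute layout, sensitivity, and budget split are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def laplace_mask(values, sensitivity, epsilon, rng=None):
    """Mask attribute values with Laplace noise of scale sensitivity/epsilon,
    the standard epsilon-DP Laplace mechanism."""
    rng = rng or np.random.default_rng()
    return values + rng.laplace(0.0, sensitivity / epsilon, size=values.shape)

# Hypothetical multilevel defect ontology: numeric attributes at each level.
ontology = {
    "process":  np.array([0.7, 0.1]),       # e.g. energy-density descriptors
    "geometry": np.array([0.3, 0.9, 0.5]),  # e.g. pore-shape attributes
    "severity": np.array([0.6]),
}

total_budget = 1.0                            # privacy budget for one sharing round
eps_per_level = total_budget / len(ontology)  # sequential composition: epsilons add up

shared = {level: laplace_mask(vals, sensitivity=1.0, epsilon=eps_per_level)
          for level, vals in ontology.items()}
epsilon_spent = eps_per_level * len(ontology)  # must stay within the declared budget
```

In the paper's framework, an evolutionary outer loop would search over which ontology branches and images to share and how to split the budget, while the zero-shot classifier is trained by stochastic gradient descent; that outer loop is omitted here.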

 
Award ID(s):
1901109
PAR ID:
10555909
Author(s) / Creator(s):
; ; ;
Editor(s):
Wang, Yan; Yang, Hui
Publisher / Repository:
American Society of Mechanical Engineers
Date Published:
Journal Name:
Journal of Computing and Information Science in Engineering
ISSN:
1530-9827
Page Range / eLocation ID:
1 to 15
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. There is an urgent need for collaborative process-defect modeling in metal-based additive manufacturing (AM), which stems mainly from the high volume of training data needed to develop reliable machine learning models for in-situ anomaly detection. The requirement for large data sets is especially challenging for small-to-medium manufacturers (SMMs), for whom collecting copious amounts of data is usually cost-prohibitive. The objective of this research is to develop a secured data-sharing mechanism for directed energy deposition (DED) based AM without disclosing product design information, facilitating secured data aggregation for collaborative modeling. One major obstacle, however, is the privacy concern that arises from data sharing, since AM process data contain confidential design information such as the printing path. The proposed adaptive design de-identification for additive manufacturing (ADDAM) methodology integrates AM process knowledge into an adaptive de-identification procedure to mask the printing-trajectory information in metal-based AM thermal history, which otherwise discloses substantial printing path information. This adaptive approach applies a flexible data privacy level to each thermal image based on its similarity with the other images, preserving data utility while protecting data privacy. A real-world case study based on the fabrication of two cylindrical parts using a DED process was used to validate the proposed method. The results are expressed as a Pareto optimal solution and demonstrate significant privacy gain with minimal utility loss: the proposed method can improve privacy by up to 30% with as little as 0% loss in dataset utility after de-identification.
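
The adaptive element described above, a per-image privacy level chosen from each thermal frame's similarity to the rest of the build, can be sketched roughly as follows. The similarity measure, the ε mapping (including its direction), and the pixelization-plus-Laplace masking are illustrative stand-ins, not the ADDAM procedure itself.

```python
import numpy as np

def pixelize(img, block):
    """Average-pool the image over block x block cells (simple pixelization)."""
    h, w = img.shape
    h2, w2 = h - h % block, w - w % block
    cells = img[:h2, :w2].reshape(h2 // block, block, w2 // block, block).mean(axis=(1, 3))
    return np.kron(cells, np.ones((block, block)))

def adaptive_deidentify(thermal_images, base_eps=1.0, block=4, rng=None):
    """Illustrative adaptive de-identification: each thermal frame gets a privacy
    level chosen from its similarity to the rest of the build (the mapping
    direction is an assumption), then is pixelized and noised."""
    rng = rng or np.random.default_rng()
    flat = np.stack([img.ravel() for img in thermal_images]).astype(float)
    mean_img = flat.mean(axis=0)
    # Cosine similarity of each frame to the dataset mean.
    sims = flat @ mean_img / (np.linalg.norm(flat, axis=1) * np.linalg.norm(mean_img) + 1e-12)
    masked = []
    for img, sim in zip(thermal_images, sims):
        eps_i = base_eps * (1.5 - sim)          # per-image privacy level (illustrative)
        pix = pixelize(img, block)
        masked.append(pix + rng.laplace(0.0, 1.0 / eps_i, pix.shape))
    return masked
```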
  2. Background The use of wearables facilitates data collection at a previously unobtainable scale, enabling the construction of complex predictive models with the potential to improve health. However, the highly personal nature of these data requires strong privacy protection against data breaches and the use of data in a way that users do not intend. One method to protect user privacy while taking advantage of sharing data across users is federated learning, a technique that allows a machine learning model to be trained using data from all users while only storing a user’s data on that user’s device. By keeping data on users’ devices, federated learning protects users’ private data from data leaks and breaches on the researcher’s central server and provides users with more control over how and when their data are used. However, there are few rigorous studies on the effectiveness of federated learning in the mobile health (mHealth) domain. Objective We review federated learning and assess whether it can be useful in the mHealth field, especially for addressing common mHealth challenges such as privacy concerns and user heterogeneity. The aims of this study are to describe federated learning in an mHealth context, apply a simulation of federated learning to an mHealth data set, and compare the performance of federated learning with the performance of other predictive models. Methods We applied a simulation of federated learning to predict the affective state of 15 subjects using physiological and motion data collected from a chest-worn device for approximately 36 minutes. We compared the results from this federated model with those from a centralized or server model and with the results from training individual models for each subject. Results In a 3-class classification problem using physiological and motion data to predict whether the subject was undertaking a neutral, amusing, or stressful task, the federated model achieved 92.8% accuracy on average, the server model achieved 93.2% accuracy on average, and the individual model achieved 90.2% accuracy on average. Conclusions Our findings support the potential for using federated learning in mHealth. The results showed that the federated model performed better than a model trained separately on each individual and nearly as well as the server model. As federated learning offers more privacy than a server model, it may be a valuable option for designing sensitive data collection methods.
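
Federated averaging, the scheme simulated in this study, is straightforward to illustrate. The sketch below uses a plain linear softmax classifier with synthetic client data standing in for the 15 subjects' physiological and motion features; the model, hyperparameters, and data are placeholders, not the study's setup.

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One FedAvg round for a linear softmax classifier: every client trains on its
    own data and only the resulting weights (never the raw data) are averaged."""
    updated, sizes = [], []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            logits = X @ w
            probs = np.exp(logits - logits.max(axis=1, keepdims=True))
            probs /= probs.sum(axis=1, keepdims=True)
            grad = X.T @ (probs - np.eye(w.shape[1])[y]) / len(X)  # softmax CE gradient
            w -= lr * grad
        updated.append(w)
        sizes.append(len(X))
    weights = np.asarray(sizes, dtype=float) / sum(sizes)
    return sum(a * w for a, w in zip(weights, updated))            # size-weighted average

# Hypothetical setup loosely mirroring the study: 15 clients, 3 affect classes.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(40, 10)), rng.integers(0, 3, size=40)) for _ in range(15)]
w_global = np.zeros((10, 3))
for _ in range(20):
    w_global = fedavg_round(w_global, clients)
```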
  3. By requiring less data for accurate models, few-shot learning has shown robustness and generality in many application domains. However, deploying few-shot models in untrusted environments may raise privacy concerns, e.g., attacks or adversaries that breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in an untrusted environment, e.g., the cloud, by establishing a novel privacy-preserving embedding space that protects data privacy while maintaining model accuracy. We examine the impact of various image privacy methods, such as blurring, pixelization, Gaussian noise, and differentially private pixelization (DP-Pix), on few-shot image classification, and propose a method that learns a privacy-preserving representation through a joint loss. The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
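
Of the image privacy methods listed, differentially private pixelization is the one with a formal guarantee; a DP-Pix-style sketch is shown below. The block size, neighborhood parameter m, and ε values are illustrative, and the 255·m/(b²ε) Laplace calibration is stated as an assumption rather than a result of this paper.

```python
import numpy as np

def dp_pix(image, b=8, m=16, epsilon=1.0, rng=None):
    """DP-Pix-style mechanism: average-pool the image over b x b cells, then add
    Laplace noise with scale 255*m/(b*b*epsilon), where m is the number of pixels
    whose change the mechanism is meant to hide (an assumed calibration here)."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    h2, w2 = h - h % b, w - w % b
    cells = image[:h2, :w2].reshape(h2 // b, b, w2 // b, b).mean(axis=(1, 3))
    cells = cells + rng.laplace(0.0, 255.0 * m / (b * b * epsilon), cells.shape)
    return np.clip(np.kron(cells, np.ones((b, b))), 0, 255)

# Illustrative use on a synthetic grayscale image before few-shot embedding.
img = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
private_img = dp_pix(img, b=8, m=16, epsilon=0.5)
```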
  4. Gradient leakage attacks are a dominant privacy threat in federated learning, despite the default protection of keeping training data local to the clients. Differential privacy has been the de facto standard for privacy protection and is deployed in federated learning to mitigate privacy risks. However, much of the existing literature points out that differential privacy fails to defend against gradient leakage. This paper presents ModelCloak, a principled approach based on differential privacy noise, aiming for safe sharing of clients' local model updates. The paper is organized into three major components. First, we introduce the gradient leakage robustness trade-off, in search of the best balance between accuracy and leakage prevention. The trade-off relation is developed based on the behavior of gradient leakage attacks throughout the federated training process. Second, we demonstrate that a proper amount of differential privacy noise can offer the best accuracy within the privacy requirement under a fixed differential privacy noise setting. Third, we propose dynamic differential privacy noise and show that the privacy-utility trade-off can be further optimized with dynamic model perturbation, ensuring privacy protection, competitive accuracy, and leakage attack prevention simultaneously.
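
A hedged sketch of the dynamic noise idea: clip each client update, then add Gaussian noise whose multiplier changes over training rounds. The schedule shape and its direction (heavier noise early, lighter later) are illustrative assumptions, not the calibration proposed in the paper.

```python
import numpy as np

def dp_perturb_update(update, clip_norm, noise_multiplier, rng):
    """Clip a client model update to clip_norm and add Gaussian noise
    (the Gaussian mechanism commonly used in DP federated learning)."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, noise_multiplier * clip_norm, update.shape)

def dynamic_noise_multiplier(round_idx, total_rounds, sigma_start=1.2, sigma_end=0.4):
    """Illustrative dynamic schedule: interpolate the noise multiplier across rounds
    (heavier noise early, lighter later is an assumption, not the paper's rule)."""
    frac = round_idx / max(1, total_rounds - 1)
    return sigma_start + frac * (sigma_end - sigma_start)

rng = np.random.default_rng(0)
local_update = rng.normal(size=100)              # a flattened local model update
for r in range(5):
    sigma = dynamic_noise_multiplier(r, total_rounds=5)
    shared_update = dp_perturb_update(local_update, clip_norm=1.0,
                                      noise_multiplier=sigma, rng=rng)
```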
  5. Motivated by ever-increasing concerns about personal data privacy and the rapidly growing data volume at local clients, federated learning (FL) has emerged as a new machine learning setting. An FL system comprises a central parameter server and multiple local clients. It keeps data at the local clients and learns a centralized model by sharing only the model parameters learned locally. No local data need to be shared, so privacy can be well protected. Nevertheless, because the model rather than the raw data is shared, the system can be exposed to model poisoning attacks launched by malicious clients, and it is challenging to identify malicious clients since no local client data are available on the server. In addition, membership inference attacks can still use the uploaded model to estimate a client's local data, leading to privacy disclosure. In this work, we first propose a model-update-based federated averaging algorithm to defend against Byzantine attacks such as additive noise attacks and sign-flipping attacks. An individual client model initialization method is then presented to provide further protection from membership inference attacks by hiding the individual local machine learning model. Combining these two schemes effectively enhances both privacy and security. The proposed schemes are shown experimentally to converge under non-IID data distributions when there are no attacks, and under Byzantine attacks they perform much better than the classical model-based FedAvg algorithm.
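
The defense combines update-based aggregation with individually initialized client models. A generic sketch of update-based aggregation with a simple robustness filter is given below; the trimming rule is a stand-in illustration, not the algorithm proposed in this work.

```python
import numpy as np

def robust_update_average(client_updates, trim_frac=0.2):
    """Average client model *updates* (not raw models or data) after trimming the
    largest-norm updates, a simple guard against additive-noise or sign-flipping
    (Byzantine) submissions. A generic illustration, not the paper's defense."""
    norms = np.array([np.linalg.norm(u) for u in client_updates])
    keep = norms.argsort()[: max(1, int(len(client_updates) * (1 - trim_frac)))]
    return np.mean([client_updates[i] for i in keep], axis=0)

# Each client keeps a privately initialized model w_i and uploads only its update
# delta_i = w_i_new - w_i_old, so the individual model itself is never revealed.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=50) for _ in range(8)]
byzantine = [rng.normal(0.0, 5.0, size=50) for _ in range(2)]  # additive-noise attackers
aggregated_update = robust_update_average(honest + byzantine)
```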