Abstract There is an urgent need for collaborative process-defect modeling in metal-based additive manufacturing (AM). This need stems mainly from the high volume of training data required to develop reliable machine learning models for in-situ anomaly detection. The data requirements are especially challenging for small-to-medium manufacturers (SMMs), for whom collecting copious amounts of data is usually cost prohibitive. The objective of this research is to develop a secure data-sharing mechanism for directed energy deposition (DED) based AM that does not disclose product design information, facilitating secure data aggregation for collaborative modeling. A major obstacle, however, is the privacy concern that arises from data sharing, since AM process data contain confidential design information such as the printing path. The proposed adaptive design de-identification for additive manufacturing (ADDAM) methodology integrates AM process knowledge into an adaptive de-identification procedure that masks the printing-trajectory information in metal-based AM thermal histories, which otherwise disclose substantial printing-path information. The adaptive approach applies a flexible privacy level to each thermal image based on its similarity to the other images, preserving data utility while protecting privacy. The method was validated in a real-world case study on the fabrication of two cylindrical parts using a DED process. The results, expressed as a Pareto optimal solution, demonstrate privacy gains of up to 30% with as little as 0% loss in dataset utility after de-identification.
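To make the adaptive idea concrete, the sketch below masks each thermal frame more aggressively the more it resembles the rest of the set, since near-duplicate frames expose more of the repeated printing trajectory. This is a minimal illustration under assumed inputs (same-shape 2-D NumPy arrays); the cosine-similarity measure, Gaussian blur, and function names are hypothetical stand-ins, not the authors' ADDAM implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_deidentify(images, base_sigma=1.0, max_sigma=4.0):
    """Blur each thermal image with a strength scaled by its average
    similarity to the rest of the set: the more typical the frame, the
    more printing-path detail it shares, so the stronger the masking."""
    flat = np.stack([img.ravel() for img in images]).astype(float)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    sim = flat @ flat.T                                     # pairwise cosine similarity
    mean_sim = (sim.sum(axis=1) - 1.0) / (len(images) - 1)  # exclude self-similarity
    masked = []
    for img, s in zip(images, mean_sim):
        sigma = base_sigma + (max_sigma - base_sigma) * s   # adaptive privacy level
        masked.append(gaussian_filter(img.astype(float), sigma=sigma))
    return masked
```

Sweeping `base_sigma` and `max_sigma` and recording (privacy gain, utility loss) pairs is one way to trace out a Pareto front like the one reported above.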
Ontology-guided Data Sharing and Federated Quality Control with Differential Privacy in Additive Manufacturing
Abstract The scarcity of measured data for defect identification often challenges the development and certification of additive manufacturing processes. Knowledge transfer and sharing have emerged as solutions to small-data challenges in quality control, improving machine learning with limited data, but this strategy raises concerns regarding privacy protection. Existing zero-shot learning and federated learning methods are insufficient for representing, selecting, and masking the data to be shared and for quantifying the resulting privacy loss. This study integrates differential privacy from cybersecurity with federated learning to investigate strategies for sharing a manufacturing defect ontology. The method first proposes using multilevel attributes, masked by noise, in the defect ontology as the shared data structure for characterizing manufacturing defects. Information leaks due to the sharing of ontology branches and data are estimated by ε-differential privacy (DP). Under federated learning, the proposed method optimizes strategies for sharing the defect ontology and image data to improve zero-shot defect classification within a privacy budget. The proposed framework includes (1) a sharing strategy based on multilevel attributes in the defect ontology with controllable privacy leakage, (2) joint optimization of decisions in differential privacy, zero-shot defect classification, and federated learning, and (3) a two-stage algorithm that solves the joint optimization by combining stochastic gradient descent for the classification models with an evolutionary algorithm for exploring data-sharing strategies. A case study on zero-shot learning of additive manufacturing defects demonstrated the effectiveness of the proposed method across data-sharing strategies, including ontology sharing, defect classification, and cloud information use.
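The ε-DP masking of ontology attributes can be illustrated with the standard Laplace mechanism, which adds noise of scale sensitivity/ε. The attribute names and the even budget split below are hypothetical; only the mechanism itself is standard.

```python
import numpy as np

_rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    """Standard epsilon-DP Laplace mechanism: noise scale = sensitivity / epsilon."""
    return value + _rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical multilevel attributes of a defect ontology (counts at the
# part, layer, and melt-pool levels); releasing each level consumes a
# share of the total privacy budget.
attributes = {"part_porosity": 12.0, "layer_porosity": 3.0, "meltpool_spatter": 7.0}
epsilon_total = 1.0
eps_per_attr = epsilon_total / len(attributes)   # naive even budget split
masked = {name: laplace_mechanism(v, sensitivity=1.0, epsilon=eps_per_attr)
          for name, v in attributes.items()}
```

By sequential composition, the three releases together consume at most `epsilon_total`; the paper's optimization instead chooses which ontology branches and data to release under the budget, rather than splitting it evenly.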
- Award ID(s): 1901109
- PAR ID: 10555909
- Publisher / Repository: American Society of Mechanical Engineers
- Date Published:
- Journal Name: Journal of Computing and Information Science in Engineering
- ISSN: 1530-9827
- Page Range / eLocation ID: 1 to 15
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Deep learning has impacted defect prediction in additive manufacturing (AM), which is important for ensuring process stability and part quality. However, its success depends on extensive training, which requires large, homogeneous datasets; these remain a challenge for the AM industry, particularly for small- and medium-sized enterprises (SMEs). The unique and varied characteristics of AM parts, along with the limited resources of SMEs, hamper data collection and make independent training of deep learning models difficult. Addressing these concerns requires knowledge sharing that exploits the shared physics of the AM process and defect-formation mechanisms while carefully handling privacy concerns. Federated learning (FL) offers a solution, allowing collaborative model training across multiple entities without sharing local data. This article introduces an FL framework to predict section-wise heat emission, a vital process signature, during laser powder bed fusion (LPBF). It incorporates a customized long short-term memory (LSTM) model for each client, capturing the time-series properties of the dynamic AM process without sharing sensitive information. Three FL algorithms, federated averaging (FedAvg), FedProx, and FedAvgM, are integrated to aggregate model weights rather than raw datasets; a sketch of the simplest of these appears below. Experiments demonstrate that the FL framework converges and maintains prediction performance comparable to individually trained models. This work demonstrates the potential of FL-enabled AM modeling and prediction, where SMEs can improve product quality without compromising data privacy.
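Federated averaging replaces the global weights with an average of the client weights, weighted by local dataset size. A minimal, framework-agnostic sketch (not the authors' code; client weights are represented as lists of NumPy arrays):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: a dataset-size-weighted average of per-client
    model parameters, computed layer by layer. Only weights are exchanged,
    never the raw process data."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    return [sum(c * w for c, w in zip(coeffs, layer_group))
            for layer_group in zip(*client_weights)]

# Example: three SME clients, each holding two parameter tensors
# (stand-ins for LSTM weights), with unequal local dataset sizes.
rng = np.random.default_rng(0)
clients = [[rng.standard_normal((4, 4)), rng.standard_normal(4)] for _ in range(3)]
global_weights = fed_avg(clients, client_sizes=[120, 80, 200])
```

FedProx and FedAvgM modify the client objective and the server update, respectively, but keep this weighted averaging at their core.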
-
Abstract Machine learning (ML) models are used for in-situ monitoring in additive manufacturing (AM) for defect detection. However, sensitive information stored in ML models, such as part designs, is at risk of leakage through unauthorized access. To address this, differential privacy (DP) introduces noise into ML, outperforming cryptography, which is slow, and data anonymization, which does not guarantee privacy. While DP enhances privacy, it reduces the precision of defect detection. This paper proposes combining DP with hyperdimensional computing (HDC), a brain-inspired model that memorizes training-sample information in a large hyperspace, to optimize real-time monitoring in AM while protecting privacy. Adding DP noise to the HDC model protects sensitive information without compromising defect-detection accuracy. Our studies demonstrate the effectiveness of this approach in monitoring anomalies, such as overhangs, through high-speed melt-pool data analysis. With a privacy budget of 1, our model achieved an F-score of 94.30%, surpassing traditional models such as ResNet50, DenseNet201, EfficientNet B2, and AlexNet, whose scores reached at most 66%. Thus, the intersection of DP and HDC promises accurate defect detection alongside protection of sensitive information in AM. The proposed method can also be extended to other AM processes, such as fused filament fabrication.
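A rough sketch of the combination described above: HDC builds a class prototype by bundling (summing) encoded training samples, and DP noise is added to the prototype before it leaves the trusted side. The encoding, feature dimensions, and noise scale below are illustrative; in particular, a formal ε guarantee over the whole vector would require calibrating the noise to the L1 sensitivity across all coordinates, not the per-coordinate bound used here.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # dimensionality of the hyperspace

def encode(sample, projection):
    """Random-projection encoding of a feature vector into a bipolar hypervector."""
    return np.sign(projection @ sample)

def dp_class_hypervector(samples, projection, epsilon, sensitivity=2.0):
    """Bundle encoded samples into a class prototype, then add Laplace noise.
    Replacing one sample moves each coordinate by at most 2 for +/-1
    encodings, hence the per-coordinate sensitivity of 2 (see caveat above)."""
    prototype = np.sum([encode(s, projection) for s in samples], axis=0)
    noise = rng.laplace(scale=sensitivity / epsilon, size=DIM)
    return prototype + noise

# Hypothetical melt-pool features: 64 statistics per high-speed frame.
projection = rng.standard_normal((DIM, 64))
normal_frames = [rng.standard_normal(64) for _ in range(50)]
prototype_normal = dp_class_hypervector(normal_frames, projection, epsilon=1.0)
```

At inference time a query frame is encoded the same way and classified by its similarity (e.g., cosine) to each noisy class prototype.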
-
Background: The use of wearables facilitates data collection at a previously unobtainable scale, enabling the construction of complex predictive models with the potential to improve health. However, the highly personal nature of these data requires strong privacy protection against breaches and against uses the users do not intend. One method to protect user privacy while still sharing data across users is federated learning, a technique that allows a machine learning model to be trained on all users' data while storing each user's data only on that user's device. By keeping data on users' devices, federated learning protects private data from leaks and breaches on the researcher's central server and gives users more control over how and when their data are used. However, there are few rigorous studies of the effectiveness of federated learning in the mobile health (mHealth) domain.
Objective: We review federated learning and assess whether it can be useful in mHealth, especially for addressing common challenges such as privacy concerns and user heterogeneity. The aims of this study are to describe federated learning in an mHealth context, apply a simulation of federated learning to an mHealth dataset, and compare its performance with that of other predictive models.
Methods: We applied a simulation of federated learning to predict the affective state of 15 subjects using physiological and motion data collected from a chest-worn device for approximately 36 minutes. We compared the results of the federated model with those of a centralized (server) model and with individual models trained for each subject; a skeleton of such a simulation appears below.
Results: In a 3-class classification problem, predicting whether the subject was undertaking a neutral, amusing, or stressful task from physiological and motion data, the federated model achieved 92.8% accuracy on average, the server model 93.2%, and the individual models 90.2%.
Conclusions: Our findings support the potential of federated learning in mHealth. The federated model performed better than models trained separately on each individual and nearly as well as the server model. As federated learning offers more privacy than a server model, it may be a valuable option for designing sensitive data collection methods.
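The abstract does not specify the model, so the skeleton below uses a simple softmax-regression stand-in for the 3-class task; the federated structure (local training on each subject's device, then size-weighted averaging of weights only) is the part being illustrated.

```python
import numpy as np

def simulate_federated_round(global_w, client_data, lr=0.1, local_epochs=1):
    """One simulated round: each subject trains locally from the current
    global weights, and only the updated weights (never the raw
    physiological data) are averaged into the next global model."""
    updates, sizes = [], []
    for X, y in client_data:                        # y is one-hot, shape (n, 3)
        w = global_w.copy()
        for _ in range(local_epochs):
            logits = X @ w
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            w -= lr * X.T @ (p - y) / len(X)        # softmax-regression gradient
        updates.append(w)
        sizes.append(len(X))
    total = float(sum(sizes))
    return sum(n / total * w for n, w in zip(sizes, updates))

# Hypothetical setup: 15 simulated subjects, 10 features, 3 affect classes.
rng = np.random.default_rng(0)
clients = [(rng.standard_normal((40, 10)), np.eye(3)[rng.integers(0, 3, 40)])
           for _ in range(15)]
w = np.zeros((10, 3))
for _ in range(20):
    w = simulate_federated_round(w, clients)
```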
-
By requiring less data for accurate models, few-shot learning has shown robustness and generality in many application domains. However, deploying few-shot models in untrusted environments, e.g., the cloud, may raise privacy concerns: attacks or adversaries may breach the privacy of user-supplied data. This paper studies privacy enhancement for few-shot learning in an untrusted environment by establishing a novel privacy-preserved embedding space that protects the privacy of the data while maintaining the accuracy of the model. We examine the impact of several image-privacy methods, such as blurring, pixelization, Gaussian noise, and differentially private pixelization (DP-Pix), on few-shot image classification and propose a method that learns a privacy-preserved representation through a joint loss. The empirical results show how the privacy-performance trade-off can be negotiated for privacy-enhanced few-shot learning.
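Of the masking methods listed, DP-Pix is the one with a formal guarantee. A compact sketch in the spirit of the published mechanism, grid-cell averaging plus Laplace noise of scale 255m/(b²ε), where m bounds how many pixels may differ between neighboring images; the parameter values here are chosen only for illustration:

```python
import numpy as np

def dp_pixelize(img, b=8, epsilon=1.0, m=16, rng=np.random.default_rng(0)):
    """Differentially private pixelization (in the spirit of DP-Pix):
    average each b x b cell of an 8-bit grayscale image, then add
    Laplace noise of scale 255*m / (b*b*epsilon) to every cell."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    scale = 255.0 * m / (b * b * epsilon)
    for i in range(0, h, b):
        for j in range(0, w, b):
            cell = img[i:i + b, j:j + b].astype(float)
            out[i:i + b, j:j + b] = cell.mean() + rng.laplace(scale=scale)
    return np.clip(out, 0, 255)
```

Smaller ε or larger m gives stronger privacy but coarser, noisier images, which is exactly the privacy-performance trade-off the proposed joint loss negotiates.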