In the past decade, we have witnessed exponential growth in deep learning (DL) models, platforms, and applications. While existing DL applications and Machine Learning as a Service (MLaaS) frameworks assume fully trusted models, the need for privacy-preserving deep neural network (DNN) evaluation arises. In a secure multi-party computation scenario, both the model and the data are considered proprietary, i.e., the model owner does not want to reveal the highly valuable DL model to the user, while the user does not wish to disclose their private data samples either. Conventional privacy-preserving deep learning solutions ask users to send encrypted samples to the model owner, who must handle the heavy lifting of ciphertext-domain computation with homomorphic encryption. In this paper, we present a novel solution, namely PrivDNN, which (1) offloads the computation to the user side by sharing an encrypted deep learning model with them, (2) significantly improves the efficiency of DNN evaluation using partial DNN encryption, and (3) ensures model accuracy and model privacy using a core neuron selection and encryption scheme. Experimental results show that PrivDNN reduces privacy-preserving DNN inference time and memory requirements by up to 97% while maintaining model performance and privacy. Code is available at https://github.com/LiangqinRen/PrivDNN
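A minimal sketch of the partial-encryption idea described above: only a small set of selected "core" neurons is evaluated in the ciphertext domain, while the rest of the network runs in plaintext on the user side. The magnitude-based neuron score and the `he_dot` placeholder are illustrative assumptions, not PrivDNN's actual selection or encryption scheme.

```python
# Sketch of partial DNN encryption: a few "core" neurons are evaluated in
# the encrypted domain; the remaining neurons run in plaintext on the user
# side. The importance score and the `he_dot` stub are assumptions, not
# the paper's actual scheme.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))   # one dense layer: 16 neurons, 8 inputs
x = rng.normal(size=8)         # the user's private input sample

# Pick the k most important neurons (here: largest weight magnitude) as
# the "core" set whose weights the model owner shares only encrypted.
k = 4
core = np.argsort(np.abs(W).sum(axis=1))[-k:]
public = np.setdiff1d(np.arange(W.shape[0]), core)

def he_dot(w_enc, v):
    """Placeholder for a homomorphic dot product over encrypted weights.
    A real system would call an HE library here; this stub computes in
    plaintext only to keep the sketch runnable."""
    return w_enc @ v

y = np.empty(W.shape[0])
y[public] = W[public] @ x      # cheap plaintext evaluation (most neurons)
y[core] = he_dot(W[core], x)   # expensive ciphertext evaluation (few neurons)
print(np.allclose(y, W @ x))   # True: partial evaluation matches the full layer
```

Because only k of the 16 neurons touch ciphertexts, the expensive homomorphic work shrinks proportionally, which is the intuition behind the reported time and memory savings.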
- Award ID(s): 2014552
- PAR ID: 10545683
- Publisher / Repository: PoPETs
- Journal Name: Proceedings on Privacy Enhancing Technologies
- Volume: 2024
- Issue: 3
- ISSN: 2299-0984
- Page Range / eLocation ID: 477 to 494
- Subject(s) / Keyword(s): Privacy-preserving Deep Learning, Homomorphic Encryption
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Lacking the expertise to benefit from their own data, average users have to upload their private data to cloud servers they may not trust. Due to legal or privacy constraints, most users are willing to contribute only their encrypted data, and lack the interest or resources to join deep neural network (DNN) training in the cloud. To train a DNN on encrypted data in a completely non-interactive way, a recent work proposes a fully homomorphic encryption (FHE)-based technique implementing all activations by Brakerski-Gentry-Vaikuntanathan (BGV)-based lookup tables. However, such inefficient lookup-table-based activations significantly prolong the private training latency of DNNs. In this paper, we propose Glyph, an FHE-based technique to train DNNs quickly and accurately on encrypted data by switching between the TFHE (Fast Fully Homomorphic Encryption over the Torus) and BGV cryptosystems. Glyph uses logic-operation-friendly TFHE to implement nonlinear activations, while adopting vectorial-arithmetic-friendly BGV to perform multiply-accumulations (MACs). Glyph further applies transfer learning to DNN training to improve test accuracy and reduce the number of ciphertext-ciphertext MACs in convolutional layers. Our experimental results show that Glyph obtains state-of-the-art accuracy and reduces training latency by 69%–99% over prior FHE-based privacy-preserving techniques on encrypted datasets.
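As a schematic of the scheme switching Glyph describes, the sketch below routes linear multiply-accumulates through a BGV-style path and nonlinear activations through a TFHE-style path, converting between them at layer boundaries. All four "cryptographic" functions are plaintext stand-ins added for illustration; they are not Glyph's implementation.

```python
# Schematic of TFHE/BGV switching: arithmetic-friendly BGV handles MACs,
# logic-friendly TFHE handles nonlinear activations. All four functions
# below are plaintext stand-ins, not real cryptographic operations.
import numpy as np

def bgv_mac(W, x_bgv):
    """Multiply-accumulate in the vectorial-arithmetic-friendly scheme (stub)."""
    return W @ x_bgv

def switch_bgv_to_tfhe(ct):
    return ct  # ciphertext conversion before the nonlinearity (stub)

def tfhe_relu(ct):
    """Nonlinear activation via logic-operation-friendly gates (stub)."""
    return np.maximum(ct, 0.0)

def switch_tfhe_to_bgv(ct):
    return ct  # convert back for the next linear layer (stub)

rng = np.random.default_rng(1)
x = rng.normal(size=8)
for _ in range(2):                    # two toy fully connected layers
    W = rng.normal(size=(8, 8))
    z = bgv_mac(W, x)                 # linear part stays in the BGV domain
    x = switch_tfhe_to_bgv(tfhe_relu(switch_bgv_to_tfhe(z)))
print(x)
```

The point of the structure is that each operation runs in the scheme where it is cheap; the conversions are the price paid for that freedom.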
-
Industry 4.0 drives exponential growth in the amount of operational data collected in factories. These data are commonly distributed and stored across different business units or cooperating companies. Such data-rich environments increase the likelihood of cyber attacks, privacy breaches, and security violations, and they pose significant challenges for analytical computing on sensitive data distributed among different business units. To fill this gap, this article presents a novel privacy-preserving framework that enables federated learning on siloed and encrypted data for smart manufacturing. Specifically, we leverage fully homomorphic encryption (FHE) to allow computation on ciphertexts, generating encrypted results that, when decrypted, match the results of the same mathematical operations performed on the plaintexts. Multilayer encryption and privacy protection reduce the likelihood of data breaches while maintaining the prediction performance of analytical models. Experimental results in real-world case studies show that the proposed framework yields superior performance in reducing the risk of cyber attacks and harnessing siloed data for smart manufacturing.
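To make the "compute on ciphertexts, decrypt, and match the plaintext result" property concrete, here is a runnable toy using the additively homomorphic Paillier cryptosystem to aggregate encrypted updates from several sites. The framework above uses FHE, which additionally supports multiplication, so this is a simplified illustration with toy key sizes, not the framework's construction.

```python
# Toy Paillier cryptosystem: multiplying ciphertexts modulo n^2 decrypts
# to the sum of the plaintexts, so an aggregator can sum encrypted updates
# without seeing them. Toy 9-bit primes for readability; real deployments
# use ~1024-bit primes or a genuine FHE scheme.
import random
from math import gcd

p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)          # phi(n), which suffices in place of lcm
g = n + 1                        # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Three sites contribute encrypted (toy, integer) model updates; the
# aggregator multiplies ciphertexts, which adds the underlying plaintexts.
updates = [3, 5, 9]
agg = 1
for m in updates:
    agg = (agg * enc(m)) % n2
print(dec(agg) == sum(updates))  # True: encrypted aggregation matches plaintext
```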
-
In the era of cloud computing and big data analysis, how to efficiently share and utilize medical information scattered across various care providers has become a critical problem. This paper proposes a new framework for sharing medical data in a secure and privacy-preserving way. This framework holistically integrates multi-authority attribute-based encryption, blockchain and smart contracts, as well as software-defined networking to define and enforce sharing policies. Specifically, in our framework, patients' medical records are encrypted and stored in hospital databases, where strict access controls are enforced with attribute-based encryption coupled with privacy-level classification. Our framework leverages blockchain technology to connect scattered private databases from participating hospitals for efficient and secure data provision, smart contracts to enable the business logic of clinical data usage, and software-defined networking to revoke sharing privileges. The performance evaluation of our prototype demonstrates that the associated computation costs are reasonable in practice.
-
There is great demand for scalable, secure, and efficient privacy-preserving machine learning models that can be trained over distributed data. While deep learning models typically achieve the best results in a centralized non-secure setting, different models can excel when privacy and communication constraints are imposed. In particular, tree-based approaches such as XGBoost have attracted much attention for their high performance and ease of use; they often achieve state-of-the-art results on tabular data. Consequently, several recent works have focused on translating Gradient Boosted Decision Tree (GBDT) models like XGBoost into federated settings via cryptographic mechanisms such as Homomorphic Encryption (HE) and Secure Multi-Party Computation (MPC). However, these mechanisms do not always provide formal privacy guarantees or consider the full range of hyperparameters and implementation settings. In this work, we implement the GBDT model under Differential Privacy (DP). We propose a general framework that captures and extends existing approaches for differentially private decision trees. Our framework of methods is tailored to the federated setting, and we show that with a careful choice of techniques it is possible to achieve very high utility while maintaining strong levels of privacy.
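One standard ingredient of differentially private GBDT training is perturbing each leaf value with noise calibrated to its sensitivity; the sketch below shows that single mechanism using the Laplace distribution. The clipping bound and per-query epsilon are illustrative assumptions and do not reflect this paper's full framework or its privacy accounting.

```python
# Laplace mechanism applied to a GBDT leaf value: clip residuals so the
# leaf mean has bounded sensitivity, then add calibrated noise. The clip
# bound and epsilon here are illustrative, not the paper's accounting.
import numpy as np

rng = np.random.default_rng(2)

def dp_leaf_value(residuals, eps, clip=1.0):
    """Return an eps-differentially-private leaf prediction.
    With residuals clipped to [-clip, clip], replacing one record changes
    the mean by at most 2 * clip / len(residuals), which fixes the Laplace
    noise scale."""
    r = np.clip(residuals, -clip, clip)
    sensitivity = 2 * clip / len(r)
    return r.mean() + rng.laplace(scale=sensitivity / eps)

residuals = rng.normal(size=100)          # gradients routed to one leaf
print(dp_leaf_value(residuals, eps=0.5))  # noisy leaf value near the true mean
```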