Title: Analysis of Security of Split Manufacturing Using Machine Learning
This paper is the first to analyze the security of split manufacturing using machine learning (ML), based on data collected from layouts provided by industry, with eight routing metal layers and significant variation in wire size and routing congestion across the layers. Many types of layout features are considered in our ML model, including those obtained from placement, routing, and cell sizes. Since the runtime cost of our basic ML procedure becomes prohibitively large for lower layers, we propose novel techniques to make it scalable with little sacrifice in the effectiveness of the attack. Moreover, we further improve the performance in the top routing layer by making use of higher quality training samples and by exploiting the routing convention. We also propose a validation-based proximity attack procedure, which generally outperforms our recent prior work. In the experiments, we analyze the ranking of the features used in our ML model and show how features vary in importance when moving to the lower layers. We provide comprehensive evaluation and comparison of our model with different configurations and demonstrate dramatically better performance of attacks compared to the prior work.
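As a rough illustration of the attack style the abstract describes, the sketch below trains a classifier on a handful of layout-derived features and then, for each sink pin, keeps the candidate source pin with the highest predicted connection probability. The synthetic data, feature names, and classifier choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Rough sketch of a feature-based connection-prediction attack of the kind the
# paper describes. The synthetic data, feature names, and classifier choice are
# illustrative assumptions, not the authors' exact pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for candidate (source pin, sink pin) pairs cut at the split layer;
# label = 1 means the pair is truly connected in the complete layout.
rng = np.random.default_rng(0)
n = 2000
pairs = pd.DataFrame({
    "sink_pin": rng.integers(0, 200, n),            # ~10 candidate sources per sink
    "pin_distance": rng.exponential(10.0, n),       # placement-derived feature
    "congestion": rng.random(n),                    # routing-derived feature
    "driver_cell_size": rng.integers(1, 8, n),      # cell-size feature
})
# Truly connected pairs tend to be closer, mimicking the proximity heuristic.
pairs["label"] = (pairs["pin_distance"] + rng.normal(0, 3, n) < 6).astype(int)

feature_cols = ["pin_distance", "congestion", "driver_cell_size"]
train_df, test_df = train_test_split(pairs, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(train_df[feature_cols], train_df["label"])

# Proximity-style decision: for every sink pin, keep the candidate source pin
# with the highest predicted connection probability.
test_df = test_df.copy()
test_df["score"] = clf.predict_proba(test_df[feature_cols])[:, 1]
guesses = test_df.loc[test_df.groupby("sink_pin")["score"].idxmax()]
print("guessed one source for", len(guesses), "sink pins")
```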
Award ID(s): 1812600
PAR ID: 10157797
Author(s) / Creator(s):
Date Published:
Journal Name: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Volume: 27
Issue: 12
ISSN: 1557-9999
Page Range / eLocation ID: 2767-2780
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This is the first work that incorporates recent advancements in "explainability" of machine learning (ML) to build a routing obfuscator called ObfusX. We adopt a recent metric, the SHAP value, which explains to what extent each layout feature can reveal each unknown connection for a recent ML-based split manufacturing attack model. The unique benefits of SHAP-based analysis include the ability to identify the best candidates for obfuscation, together with the dominant layout features that make them vulnerable. As a result, ObfusX achieves a better (97% lower) attack hit rate while perturbing significantly fewer nets when obfuscating with a via perturbation scheme, compared to prior work. When imposing the same wirelength limit using a wire lifting scheme, ObfusX performs significantly better on performance metrics (e.g., on average 2.4 times greater reduction in the percentage of the netlist recovered).
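A rough sketch of the SHAP-guided target-selection idea is given below: rank candidate connections by how strongly their features push a trained attack model toward "connected", then inspect the dominant feature per target. The synthetic features, labels, and model are stand-ins, not ObfusX's actual inputs, and the SHAP return shape can vary across library versions.

```python
# Rough sketch of SHAP-guided target selection in the spirit of ObfusX.
# The synthetic features, labels, and model are stand-ins for a trained
# split-manufacturing attack model; thresholds and shapes are illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                      # layout features per candidate pair
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)      # stand-in "connected" labels

attack_model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(attack_model)
sv = explainer.shap_values(X)                      # per-feature contributions
if isinstance(sv, list):                           # some shap versions return
    sv = sv[1]                                     # one array per class

# Candidate connections whose features push hardest toward "connected" are the
# most exposed, hence the best obfuscation targets.
exposure = sv.sum(axis=1)
targets = np.argsort(exposure)[::-1][:20]

# The dominant revealing feature per target hints at which perturbation
# (e.g., via move or wire lift) would most weaken the attack's evidence.
dominant = np.abs(sv[targets]).argmax(axis=1)
print(list(zip(targets[:5].tolist(), dominant[:5].tolist())))
```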
  2. Network-on-chip (NoC) is widely used to facilitate communication between components in sophisticated system-on-chip (SoC) designs. Security of the on-chip communication is crucial because exploiting any vulnerability in a shared NoC would be a goldmine for an attacker, putting the entire computing infrastructure at risk. We investigate the security strength of existing anonymous routing protocols in NoC architectures and make two pivotal contributions. First, we develop and perform a machine learning (ML)-based flow correlation attack on existing anonymous routing techniques in NoC systems, revealing that they provide only packet-level anonymity. Second, we propose a novel, lightweight anonymous routing protocol featuring outbound traffic tunneling and traffic obfuscation. This protocol is designed to provide robust defense against ML-based flow correlation attacks, ensuring both packet-level and flow-level anonymity. Experimental evaluation using both real and synthetic traffic demonstrates that our proposed attack successfully deanonymizes state-of-the-art anonymous routing in NoC architectures with high accuracy (up to 99%) across diverse traffic patterns. It also shows that our lightweight anonymous routing protocol can defend against ML-based attacks with minor hardware and performance overhead.
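As a rough illustration of what an ML-based flow-correlation test looks like, the sketch below builds inter-packet-delay features from two observation points and trains a classifier to decide whether they carry the same flow. The synthetic traffic model and feature set are assumptions, not the paper's exact attack.

```python
# Rough sketch of an ML-based flow-correlation test of the kind described
# above. The synthetic traffic traces and the inter-packet-delay feature set
# are illustrative assumptions, not the paper's exact attack.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def ipd_features(t):
    """Summary statistics of inter-packet delays at one observation point."""
    ipd = np.diff(np.sort(t))
    return [ipd.mean(), ipd.std(), np.median(ipd), ipd.max()]

def make_pair(correlated):
    """Two traces that either carry the same flow (jittered copy) or two
    independent flows."""
    src = np.cumsum(rng.exponential(1.0, size=100))
    if correlated:
        dst = src + rng.normal(0.0, 0.05, size=100)      # same flow, small jitter
    else:
        dst = np.cumsum(rng.exponential(1.0, size=100))  # unrelated flow
    cross = np.corrcoef(np.diff(src), np.diff(dst))[0, 1]
    return ipd_features(src) + ipd_features(dst) + [cross]

X = np.array([make_pair(c) for c in [True] * 500 + [False] * 500])
y = np.array([1] * 500 + [0] * 500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("flow-linking accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```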
  3. We study the problem of out-of-distribution (OOD) detection, that is, detecting whether a machine learning (ML) model's output can be trusted at inference time. While a number of tests for OOD detection have been proposed in prior work, a formal framework for studying this problem is lacking. We propose a definition for the notion of OOD that includes both the input distribution and the ML model, which provides insights for the construction of powerful tests for OOD detection. We also propose a multiple-hypothesis-testing-inspired procedure to systematically combine any number of different statistics from the ML model using conformal p-values. We further provide strong guarantees on the probability of incorrectly classifying an in-distribution sample as OOD. In our experiments, we find that threshold-based tests proposed in prior work perform well in specific settings, but not uniformly well across different OOD instances. In contrast, our proposed method, which combines multiple statistics, performs uniformly well across different datasets and neural network architectures.
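A rough worked sketch of the conformal p-value idea follows: each model statistic gets a p-value from held-out in-distribution calibration scores, and the per-statistic p-values are then combined. The two statistics and Fisher's method are illustrative choices, not necessarily the paper's exact procedure or its guarantees.

```python
# Rough sketch of conformal p-values and a simple combination rule, in the
# spirit of the framework above. The two statistics and Fisher's method are
# illustrative choices, not necessarily the paper's exact procedure.
import numpy as np
from scipy.stats import combine_pvalues

rng = np.random.default_rng(0)

def conformal_pvalue(cal_scores, test_score):
    """p = (1 + #{calibration scores >= test score}) / (n + 1);
    larger scores are treated as more OOD-like."""
    return (1 + np.sum(cal_scores >= test_score)) / (len(cal_scores) + 1)

# Calibration statistics computed on held-out in-distribution data (synthetic
# here), e.g., negative max-softmax and a feature-norm statistic of the model.
cal_stat_a = rng.normal(0, 1, 1000)
cal_stat_b = rng.normal(0, 1, 1000)

def ood_pvalue(stat_a, stat_b):
    p_a = conformal_pvalue(cal_stat_a, stat_a)
    p_b = conformal_pvalue(cal_stat_b, stat_b)
    # Combine the per-statistic p-values; Fisher's method is one simple option.
    return combine_pvalues([p_a, p_b], method="fisher")[1]

print("typical sample: ", ood_pvalue(0.1, -0.2))   # large p-value: looks in-distribution
print("outlying sample:", ood_pvalue(4.0, 5.0))    # small p-value: flag as OOD
```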
  4. In this paper, we propose a canonical prune-and-SAT (CP&SAT) attack for breaking state-of-the-art routing-based obfuscation techniques. In the CP&SAT attack, we first encode the key-programmable routing blocks (keyRBs) using an efficient SAT encoding mechanism suited for detailed routing constraints, and then efficiently re-encode and reduce the CNF corresponding to the keyRBs using a bounded variable addition (BVA) algorithm. This pre-processing is done before subjecting the circuit to the SAT attack. We illustrate that this encoding and BVA-based pre-processing significantly reduces the size of the CNF corresponding to the routing-based obfuscated circuit, as a result of which we observe a 100% success rate in breaking prior routing-based obfuscation techniques. Further, we propose a new intercorrelated logic and routing locking technique, InterLock for short, as a countermeasure to mitigate the CP&SAT attack. In InterLock, in addition to hiding the connectivity, part of the logic (gates) in the selected timing paths is also implemented in the keyRB(s). We illustrate that when the logic gates are twisted with keyRBs, BVA cannot provide any advantage as a pre-processing step. Our experimental results show that, using InterLock with only three 8×8 or only two 16×16 keyRBs (twisted with actual logic gates), resilience against existing attacks as well as our newly proposed CP&SAT attack is guaranteed while, on average, the delay/area overhead is less than 10% even for medium-size benchmark circuits.
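To make the encoding step concrete, the sketch below writes the CNF for a single key-programmable routing choice (a 2:1 switch) with PySAT and queries it under one input/output observation. The tiny switch and variable numbering are illustrative; the real CP&SAT flow encodes full keyRBs, compresses the CNF with BVA, and runs an oracle-guided SAT attack.

```python
# Rough sketch of encoding a single key-programmable routing choice as CNF with
# PySAT. The tiny 2:1 "switch" and variable numbering are illustrative; the
# actual CP&SAT flow encodes full keyRBs, compresses the CNF with bounded
# variable addition (BVA), and then runs an oracle-guided SAT attack.
from pysat.formula import CNF
from pysat.solvers import Glucose3

# Variables: 1,2 = inputs a,b; 3 = output y; 4 = key bit k.
# Intended semantics of the routing switch: y = a if k else b.
a, b, y, k = 1, 2, 3, 4
cnf = CNF()
cnf.extend([[-k, -a, y], [-k, a, -y]])   #  k -> (y <-> a)
cnf.extend([[k, -b, y], [k, b, -y]])     # !k -> (y <-> b)

with Glucose3(bootstrap_with=cnf.clauses) as solver:
    # Oracle-style query: with inputs a=1, b=0 and observed output y=1,
    # which key value is consistent?
    assert solver.solve(assumptions=[a, -b, y])
    model = solver.get_model()           # signed literals in variable order
    print("key bit k =", model[k - 1] > 0)
```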
  5. Explainability is increasingly recognized as an enabling technology for the broader adoption of machine learning (ML), particularly for safety-critical applications. This has given rise to explainable ML, which seeks to enhance the explainability of neural networks through the use of explanators. Yet, the pursuit of better explainability inadvertently leads to increased security and privacy risks. While there has been considerable research into the security risks of explainable ML, its potential privacy risks remain under-explored. To bridge this gap, we present a systematic study of privacy risks in explainable ML through the lens of membership inference. Building on the observation that, besides model accuracy, robustness also exhibits observable differences between member and non-member samples, we develop a new membership inference attack. The attack extracts additional membership features from changes in model confidence under different levels of perturbation, guided by the importance highlighted by the attribution maps of the explanators. Intuitively, perturbing important features generally results in a bigger loss in confidence for member samples. Using the member/non-member differences in both model performance and robustness, an attack model is trained to distinguish membership. We evaluated our approach with seven popular explanators across various benchmark models and datasets. Our attack demonstrates that there is non-trivial privacy leakage in current explainable ML methods. Furthermore, such leakage persists even if the attacker lacks knowledge of the training datasets or target model architectures. Lastly, we also find that existing model-based and output-based defense mechanisms are not effective in mitigating this new attack.
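The sketch below illustrates the kind of attribution-guided robustness features such an attack could extract: zero out an increasing fraction of the most important inputs and record how the confidence on the true label decays. The tiny model, plain gradient saliency (standing in for an explanator), and perturbation levels are all assumptions, not the paper's setup.

```python
# Rough sketch of attribution-guided robustness features for a membership
# inference attack, as described above. The tiny model, plain gradient saliency
# (standing in for an explanator), and perturbation levels are all assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(28 * 28, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))       # stand-in target model

def membership_features(x, label, levels=(0.0, 0.1, 0.2, 0.4)):
    """Confidence on the true label as increasingly many of the most important
    input features (per the saliency map) are zeroed out."""
    x = x.clone().requires_grad_(True)
    F.softmax(model(x), dim=1)[0, label].backward()
    importance = x.grad.abs().flatten()                    # attribution stand-in
    order = importance.argsort(descending=True)

    feats = []
    for frac in levels:
        xp = x.detach().clone().flatten()
        xp[order[: int(frac * xp.numel())]] = 0.0          # perturb top features
        with torch.no_grad():
            feats.append(F.softmax(model(xp.view_as(x)), dim=1)[0, label].item())
    # Members typically lose confidence faster under these perturbations, so
    # this curve feeds the downstream attack classifier.
    return feats

print(membership_features(torch.rand(1, 1, 28, 28), label=3))
```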