Title: CAPTIVE: Constrained Adversarial Perturbations to Thwart IC Reverse Engineering
Reverse engineering (RE) of Integrated Circuits (ICs) is a process in which one attempts to extract the internals of an IC, recover its circuit structure, and determine its gate-level information. In general, RE can be performed for validation as well as for Intellectual Property (IP) theft. RE also facilitates illicit activities such as inserting a hardware Trojan, pirating or counterfeiting a design, or developing an attack. In this work, we propose an approach that introduces cognitive perturbations, with the aid of adversarial machine learning, into the IC layout to prevent the RE process from succeeding. We first construct a layer-by-layer image dataset of a 45 nm predictive technology. With this dataset, we propose a convolutional neural network model called RecoG-Net to recognize the logic gates, which is the first step in RE. RecoG-Net recognizes the gates with more than 99.7% accuracy. Our thwarting approach uses adversarial attack generation algorithms to generate the perturbation. Unlike traditional adversarial attacks in machine learning, the perturbation generation must be highly constrained to meet fabrication rules such as Design Rule Checking (DRC) and Layout versus Schematic (LVS) checks. Hence, we propose CAPTIVE, a constrained perturbation generator that satisfies the DRC. The experiments show that the accuracy of reverse engineering using machine learning techniques decreases from 100% to approximately 30%, depending on the adversarial generator.
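The abstract does not spell out the perturbation algorithm itself. As a rough illustration of the constrained-perturbation idea, the sketch below applies a single FGSM-style step to a gate-layer image and keeps the perturbation only where a mask marks edits as legal; the classifier interface, the `drc_mask` tensor, and the step size are illustrative assumptions, not the published CAPTIVE implementation.

```python
# Minimal sketch (PyTorch): one FGSM-style step whose perturbation is kept
# only where a mask marks edits as legal. The classifier, drc_mask, and eps
# are illustrative assumptions, not the published CAPTIVE algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

def constrained_perturbation(model: nn.Module,
                             layout: torch.Tensor,    # (1, C, H, W) gate-layer image
                             label: torch.Tensor,     # (1,) true gate class index
                             drc_mask: torch.Tensor,  # 1.0 where edits are assumed DRC-legal
                             eps: float = 0.05) -> torch.Tensor:
    layout = layout.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(layout), label)
    loss.backward()
    step = eps * layout.grad.sign()                    # unconstrained adversarial direction
    adv = (layout + step * drc_mask).clamp(0.0, 1.0)   # keep only mask-legal changes
    return adv.detach()
```

In the paper the constraint is the full DRC/LVS rule set applied to the actual layout geometry rather than a pixel mask; the mask here only stands in for where such a projection would act.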
Award ID(s): 2213486, 2213404
PAR ID: 10502530
Author(s) / Creator(s): ; ; ; ; ;
Publisher / Repository: Information 2023
Date Published:
Journal Name: Information
Volume: 14
Issue: 12
ISSN: 2078-2489
Page Range / eLocation ID: 656
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. With many fabless companies outsourcing integrated circuit (IC) fabrication, the extent of design information recoverable by a third-party foundry remains unclear: while traditional reverse engineering from the layout relies on expensive high-resolution imaging, what the foundry itself can recover directly from the layout files is ambiguous. To address this ambiguity, we propose ReGDS, a layout reverse engineering (RE) framework posing as an inside-foundry attack to acquire the original design intent. Our framework uses the layout, in GDSII format, and the technology library to extract transistor-level connectivity, and exploits unique relationship-based matching to identify logic gates and thereby recover the original gate-level netlist (a toy matching sketch appears after this list). Employing circuits ranging from a few hundred to millions of transistors, we validate the scalability of our framework and demonstrate 100% recovery of the original design from the layout. To further validate its effectiveness in the presence of obfuscation schemes, we apply ReGDS to layouts of conventional XOR/MUX-locked circuits and successfully recover the obfuscated netlist. By applying the Boolean SATisfiability (SAT) attack on the recovered obfuscated netlist, one can recover the entire key and thereby retrieve the original design intent. ReGDS thus gives the attacker the gate-level netlist far faster than imaging-based RE schemes. Our experiments unearth the potential threat of intellectual property (IP) piracy at any third-party foundry.
  2. The DARPA POSH program resonates with the research community in identifying that engineering productivity has fallen behind Moore's law, resulting in a prohibitive increase in IC design cost at leading technology nodes. The primary reason is that completing a design implementation requires extensive computing resources, expensive tools, and often many days. Worse, at the end of this process some designs still cannot meet their design constraints and become unroutable, creating a vicious design cycle in which designers must re-run the whole flow after modifying the design. This research applies a machine learning approach that automatically identifies design constraints and design rule checking (DRC) violation issues, helping the designer settle, via an iterative greedy search, on design constraints with optimal DRC outcomes before the long detailed-routing step (sketched after this list). The proposed algorithm achieves up to 99.99% design-constraint prediction accuracy and reduces DRC violations by 98.4% with only a 6.9% area penalty.
  3. As real-world applications (image segmentation, speech recognition, machine translation, etc.) increasingly adopt Deep Neural Networks (DNNs), DNN vulnerabilities in malicious environments have become an increasingly important topic in adversarial machine learning (AML), which studies the vulnerabilities of machine learning models and defenses against them. Recent work has shown that most adversarial audio generation methods fail to preserve the audio's temporal dependency (TD); that is, adversarial audio exhibits weaker TD than benign audio, so it is easily detected by examining its TD. One area of interest in the audio AML community is therefore to develop an attack that evades TD-based detection (the TD check is sketched after this list). In this contribution, we revisit the LSTM model for audio transcription and propose a new audio attack algorithm that evades TD-based detection by explicitly controlling the TD in the generated adversarial audio. Experimental results show that the detectability of our adversarial audio is significantly reduced compared with state-of-the-art audio attack algorithms. Furthermore, experiments show that our adversarial audio remains nearly indistinguishable from benign audio, with only a negligible perturbation magnitude.
  4. Deep learning has been widely applied to VLSI design automation tasks, from layout quality estimation to design optimization. Although deep learning has shown state-of-the-art performance in several applications, recent studies reveal that deep neural networks are intrinsically vulnerable to adversarial perturbations, which poses risks in the ML-aided VLSI design flow. One of the most effective strategies for improving robustness is regularization, which adjusts the optimization objective so that the network generalizes better. In this paper, we examine several adversarial defense methods for improving the robustness of ML-based lithography hotspot detectors, and present a design rule checking (DRC)-guided curvature regularization (CURE) approach customized to harden such detectors against white-box attacks (a generic curvature-regularization sketch appears after this list). Our approach improves both the robustness and the accuracy of the model. Experiments show that the model optimized by DRC-guided CURE achieves the highest robustness and accuracy compared with models trained using the baseline defense methods. Compared with the vanilla model, DRC-guided CURE decreases the average attack success rate by 53.9% and increases the average ROC-AUC by 12.1%; compared with the best defense baseline, it reduces the average attack success rate by 18.6% and improves the average ROC-AUC by 4.3%.
  5. Logic locking has been widely evaluated as a proactive countermeasure against hardware security threats within the IC supply chain. However, the introduction of the SAT attack and many of its derivatives has raised serious concerns about this form of countermeasure. In this paper, we explore the possibility of exploiting chaos computing as a new means of logic locking. We introduce the concept of chaotic logic locking, called ChaoLock, in which, by leveraging asymmetric inputs in digital chaotic Boolean gates, we add programmability (key-configurability) to the sets of underlying initial conditions and system parameters that determine the operation (functionality) of each digital chaotic Boolean gate (a toy chaotic gate is sketched after this list). We also propose dummy inputs for the chaotic Boolean gates and show that, during reverse engineering, these dummy inputs conceal the gates' main functionality, making reverse engineering almost impossible. A security analysis of ChaoLock shows that, with no restriction on conventional CMOS-based ASIC implementation and no compromise of test/debug, none of the state-of-the-art attacks on logic locking, including the SAT attack, can reformulate the chaotic Boolean gates while dummy inputs are involved and their parameters are locked. Our analysis and experimental results show that, with a small number of chaotic Boolean gates mixed with CMOS digital gates, ChaoLock can guarantee resiliency against state-of-the-art attacks on logic locking at low overhead.
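For item 1 (ReGDS), the abstract mentions relationship-based matching of transistor-level connectivity to identify logic gates but does not give the matching rules. The sketch below uses networkx subgraph isomorphism over a hypothetical transistor/net graph as a stand-in; the inverter template and node attributes are assumptions for illustration.

```python
# Toy sketch for item 1: treat the extracted netlist as a graph of transistors
# and nets, and find gate instances by subgraph isomorphism against small
# templates. The template and node attributes are assumptions; the actual
# ReGDS matching rules are not given in the abstract.
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher, categorical_node_match

def inverter_template() -> nx.Graph:
    """One PMOS and one NMOS sharing an input (gate) net and an output (drain) net."""
    g = nx.Graph()
    g.add_node("MP", kind="pmos")
    g.add_node("MN", kind="nmos")
    g.add_node("IN", kind="net")
    g.add_node("OUT", kind="net")
    g.add_edges_from([("MP", "IN"), ("MN", "IN"),      # shared gate net
                      ("MP", "OUT"), ("MN", "OUT")])   # shared drain net
    return g

def find_gate_instances(design: nx.Graph, template: nx.Graph):
    """Yield mappings of design nodes onto template nodes, one per match."""
    matcher = GraphMatcher(design, template,
                           node_match=categorical_node_match("kind", None))
    yield from matcher.subgraph_isomorphisms_iter()
```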
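For item 2, the iterative greedy search over design constraints is described only at a high level; the loop below sketches it with a placeholder `predict_drc_violations` model, since neither the constraint encoding nor the predictor is specified in the abstract.

```python
# Toy sketch for item 2: iterative greedy search over candidate constraint
# settings, keeping whichever change a (placeholder) ML predictor says yields
# the fewest DRC violations. The constraint encoding and the predictor are
# assumptions, not the paper's implementation.
from typing import Callable, Dict, List, Tuple

Candidate = Tuple[str, float]   # e.g. ("clock_period_ns", 1.2)

def greedy_constraint_search(initial: Dict[str, float],
                             candidates: List[Candidate],
                             predict_drc_violations: Callable[[Dict[str, float]], float],
                             max_iters: int = 20) -> Dict[str, float]:
    best = dict(initial)
    best_score = predict_drc_violations(best)
    for _ in range(max_iters):
        improved = False
        for name, value in candidates:
            trial = dict(best)
            trial[name] = value
            score = predict_drc_violations(trial)
            if score < best_score:              # fewer predicted violations
                best, best_score, improved = trial, score, True
        if not improved:                        # local optimum: stop early
            break
    return best
```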
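For item 3, the temporal-dependency (TD) check that the proposed attack is built to evade can be sketched as follows: transcribe a prefix of the audio and compare it against the corresponding prefix of the full transcription. The `transcribe` callable is a placeholder for any speech-to-text model.

```python
# Toy sketch for item 3: the TD consistency check the attack aims to evade.
# `transcribe` is a placeholder for any speech-to-text model; benign audio
# tends to score closer to 1.0 than adversarial audio under this check.
from difflib import SequenceMatcher
from typing import Callable, Sequence

def td_consistency(audio: Sequence[float],
                   transcribe: Callable[[Sequence[float]], str],
                   k: float = 0.5) -> float:
    """Transcribe the first k fraction of the audio and compare it with the
    corresponding prefix of the full transcription."""
    cut = int(len(audio) * k)
    partial_text = transcribe(audio[:cut])
    full_text = transcribe(audio)
    prefix = full_text[: len(partial_text)]
    return SequenceMatcher(None, partial_text, prefix).ratio()  # similarity in [0, 1]
```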
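For item 4, DRC-guided CURE is described only at a high level; the sketch below shows a generic curvature-regularization penalty, a finite-difference change in the loss gradient along a perturbation direction, with a hypothetical `drc_direction` tensor standing in for the paper's DRC-guided direction.

```python
# Toy sketch for item 4: a CURE-style curvature penalty, approximated as the
# finite-difference change of the loss gradient along a perturbation
# direction. `drc_direction` is a stand-in for the paper's DRC-guided
# direction, which the abstract does not specify.
import torch
import torch.nn.functional as F

def curvature_penalty(model, x, y, drc_direction, h: float = 1e-2) -> torch.Tensor:
    def loss_grad(inp: torch.Tensor) -> torch.Tensor:
        inp = inp.detach().clone().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        # create_graph=True so the penalty itself can be backpropagated
        return torch.autograd.grad(loss, inp, create_graph=True)[0]

    z = drc_direction / (drc_direction.norm() + 1e-12)  # unit direction
    g0 = loss_grad(x)
    g1 = loss_grad(x + h * z)
    return ((g1 - g0) ** 2).sum()   # large value = high curvature along z
```

In training, this penalty would be added to the task loss with a weighting coefficient.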
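For item 5, the key-configurability of chaotic Boolean gates can be illustrated with a toy gate built on the logistic map, where the initial condition, control parameter, iteration count, and output threshold play the role of the key-configured parameters; the encoding below is purely illustrative, not ChaoLock's construction.

```python
# Toy sketch for item 5: a chaotic Boolean gate based on the logistic map.
# The key (x0, a, delta, n_iter, threshold) configures which two-input
# Boolean function the gate realizes; dummy inputs could be folded in the
# same way. The encoding is illustrative, not ChaoLock's design.
def chaotic_gate(in0: int, in1: int, key: dict) -> int:
    x = key["x0"] + key["delta"] * (in0 + in1)   # inputs shift the initial condition
    for _ in range(key["n_iter"]):
        x = key["a"] * x * (1.0 - x)             # logistic-map iteration
    return int(x > key["threshold"])             # threshold to a Boolean output

# Example key; different keys realize different gate functions.
key = {"x0": 0.1, "a": 3.99, "delta": 0.25, "n_iter": 3, "threshold": 0.5}
truth_table = {(a, b): chaotic_gate(a, b, key) for a in (0, 1) for b in (0, 1)}
```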