Title: LIPSTICK: Corruptibility-Aware and Explainable Graph Neural Network-based Oracle-Less Attack on Logic Locking
In a zero-trust fabless paradigm, designers are increasingly concerned about hardware-based attacks on the semiconductor supply chain. Logic locking is a design-for-trust method that adds extra key-controlled gates in the circuits to prevent hardware intellectual property theft and overproduction. While attackers have traditionally relied on an oracle to attack logic-locked circuits, machine learning attacks have shown the ability to retrieve the secret key even without access to an oracle. In this paper, we first examine the limitations of state-of-the-art machine learning attacks and argue that the use of key hamming distance as the sole model-guiding structural metric is not always useful. Then, we develop, train, and test a corruptibility-aware graph neural network-based oracle-less attack on logic locking that takes into consideration both the structure and the behavior of the circuits. Our model is explainable in the sense that we analyze what the machine learning model has interpreted in the training process and how it can perform a successful attack. Chip designers may find this information beneficial in securing their designs while avoiding incremental fixes.
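To make the attack flow concrete, the following is a minimal, hypothetical sketch of a corruptibility-aware GNN classifier over key gates. It is not the authors' LIPSTICK model; the gate-type features, toy netlist, labels, and corruptibility weights are illustrative placeholders.

```python
# Minimal sketch (not the authors' LIPSTICK model): a graph neural network that
# classifies key-gate nodes of a locked netlist as key=0 or key=1, with a
# corruptibility-style per-node weight in the loss. Node features, weights, and
# the toy graph below are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class KeyGateGNN(torch.nn.Module):
    def __init__(self, num_gate_types, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(num_gate_types, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 2)   # logits for key bit 0 / 1

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        return self.head(h)

# Toy locked netlist: 5 gates, one-hot gate-type features, wires as directed edges.
x = torch.eye(5)                                  # hypothetical gate-type encoding
edge_index = torch.tensor([[0, 1, 2, 3], [2, 2, 4, 4]])
key_gate_mask = torch.tensor([False, False, True, False, True])
key_labels = torch.tensor([0, 0, 1, 0, 0])        # ground-truth key bits (training set only)
corruptibility = torch.tensor([0.0, 0.0, 0.8, 0.0, 0.3])  # hypothetical output-corruption weights

model = KeyGateGNN(num_gate_types=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    logits = model(x, edge_index)
    # Weight each key gate's loss by its corruptibility so behavior, not just
    # structure, guides training (the paper's motivation, sketched crudely here).
    loss = (F.cross_entropy(logits[key_gate_mask], key_labels[key_gate_mask],
                            reduction="none") * corruptibility[key_gate_mask]).mean()
    loss.backward()
    opt.step()

predicted_key = logits[key_gate_mask].argmax(dim=1)   # oracle-less key guess
```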
Award ID(s):
2245247
PAR ID:
10569958
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-9354-5
Page Range / eLocation ID:
606 to 611
Format(s):
Medium: X
Location:
Incheon, Korea, Republic of
Sponsoring Org:
National Science Foundation
More Like this
  1. To enable trust in the IC supply chain, logic locking as an IP protection technique has received significant attention in recent years. Over the years, by utilizing Boolean satisfiability (SAT) solvers and their derivations, many de-obfuscation attacks have undermined the security of logic locking. Nonetheless, all these attacks receive their inputs (locked circuits) in a very simplified format (Bench, or remapped and translated Verilog) with many limitations. This raises the bar for using the existing attacks to model and assess new logic locking techniques, forcing designers to undergo many troublesome translations and simplifications. This paper introduces the RANE attack, an open-source CAD-based toolbox for evaluating the security of logic locking mechanisms that implements a unique interface for using formal verification tools without any need for translation or simplification. The RANE attack not only performs better than the existing de-obfuscation attacks, but it can also accept library-dependent logic-locked circuits written, elaborated, or synthesized in any standard HDL, such as Verilog, without limitation. We evaluated the capability and performance of RANE on four case studies, one of which is the first de-obfuscation attack model on FSM locking solutions (e.g., HARPOON), in which the key is not a static bit-vector but a sequence of input patterns.
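As a toy illustration of the oracle-guided de-obfuscation idea that tools like RANE scale up with formal engines on full HDL, the sketch below models a locked circuit as a Python function and prunes key hypotheses that disagree with an oracle. The circuit, key, and input space are hypothetical, and real attacks use SAT/formal solvers rather than enumeration.

```python
# Toy illustration of oracle-guided de-obfuscation (not the RANE toolflow, which
# drives formal-verification engines on full HDL). A locked circuit is modeled as
# a Python function of (inputs, key); the oracle is the unlocked chip.
from itertools import product

def locked_circuit(a, b, c, k0, k1):
    # Hypothetical 3-input circuit locked with two XOR key gates.
    return ((a & b) ^ k0) | ((b ^ c) ^ k1)

CORRECT_KEY = (0, 1)                       # known only to the oracle (the bought chip)
def oracle(a, b, c):
    return locked_circuit(a, b, c, *CORRECT_KEY)

# Start with every key hypothesis, then prune the ones that disagree with the
# oracle on some input pattern; what survives is functionally correct.
candidates = set(product((0, 1), repeat=2))
for a, b, c in product((0, 1), repeat=3):
    out = oracle(a, b, c)
    candidates = {k for k in candidates if locked_circuit(a, b, c, *k) == out}

print(candidates)                          # {(0, 1)} for this toy example
```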
  2. Oracle-less machine learning (ML) attacks have broken various logic locking schemes. Regular synthesis, which is tailored for area-power-delay optimization, yields netlists whose key-gate localities are vulnerable to learning. Thus, we call for security-aware logic synthesis. We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe generator, employing adversarially trained models that can predict state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracies drop to around 50% for ALMOST-synthesized circuits, all while not undermining design optimization.
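The sketch below illustrates the recipe-search loop in simplified form: simulated annealing over synthesis recipes, scored by a stand-in predictor of attack accuracy. The pass names, predictor, and annealing constants are hypothetical, not ALMOST's actual components.

```python
# Minimal sketch of the ALMOST idea: simulated annealing over synthesis recipes,
# scored by a (here, stand-in) model that predicts an ML attack's key-recovery
# accuracy for a recipe. Pass names, the predictor, and constants are hypothetical.
import math, random

PASSES = ["balance", "rewrite", "refactor", "resub", "dc2"]   # hypothetical pass names

def predicted_attack_accuracy(recipe):
    # Stand-in for an adversarially trained predictor; returns a value in [0.5, 1].
    rng = random.Random(hash(tuple(recipe)))
    return min(1.0, 0.5 + abs(rng.gauss(0, 0.15)))

def neighbour(recipe):
    r = recipe[:]
    r[random.randrange(len(r))] = random.choice(PASSES)        # mutate one pass
    return r

def anneal(steps=500, length=8, t0=1.0, alpha=0.995):
    current = [random.choice(PASSES) for _ in range(length)]
    cost = abs(predicted_attack_accuracy(current) - 0.5)       # 50% accuracy = random guessing
    best, best_cost, t = current, cost, t0
    for _ in range(steps):
        cand = neighbour(current)
        c = abs(predicted_attack_accuracy(cand) - 0.5)
        if c < cost or random.random() < math.exp((cost - c) / t):
            current, cost = cand, c
        if cost < best_cost:
            best, best_cost = current, cost
        t *= alpha
    return best, best_cost

recipe, score = anneal()
print("chosen recipe:", recipe, "| distance from 50% attack accuracy:", round(score, 3))
```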
  3. Due to the globalization of semiconductor manufacturing and test processes, system-on-chip (SoC) designers no longer design the complete SoC and manufacture chips on their own. This outsourcing of the design and manufacturing of Integrated Circuits (ICs) has resulted in several threats, such as overproduction of ICs, sale of out-of-specification/rejected ICs, and piracy of Intellectual Properties (IPs). Logic locking has emerged as a promising defense against these threats. However, various attacks aimed at extracting the secret key have undermined the security of logic locking techniques. Over the years, researchers have proposed different techniques to prevent existing attacks. In this article, we propose a novel attack that can break any logic locking technique that relies on a stored secret key. The proposed TAAL attack is based on implanting a hardware Trojan in the netlist that leaks the secret key to an adversary once activated. As an untrusted foundry can extract the netlist of a design from the layout/mask information, it is feasible to implement such a hardware Trojan. All three proposed types of TAAL attacks can be used to extract secret keys. We introduce models for both combinational and sequential hardware Trojans that evade manufacturing tests. An adversary only needs to choose one hardware Trojan out of a large set of all possible Trojans to launch the TAAL attack.
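The following is a conceptual, hypothetical sketch of the leak mechanism only (not the paper's netlist-level Trojan models): a rare trigger pattern multiplexes a key bit onto an output, so ordinary functional tests pass while the adversary applies the trigger and reads the key.

```python
# Conceptual sketch of a TAAL-style combinational Trojan (not the paper's netlist-
# level construction): on a rare trigger pattern, the payload routes a key bit to
# a primary output instead of the locked logic's value. All signals are hypothetical.
def locked_logic(a, b, c, key_bit):
    return (a & b) ^ key_bit ^ c              # some key-dependent function

def trojaned_output(a, b, c, key_bit):
    trigger = (a == 1 and b == 0 and c == 1)  # rare pattern, unlikely to be hit by tests
    # Payload: multiplex the secret key bit onto the output when triggered.
    return key_bit if trigger else locked_logic(a, b, c, key_bit)

SECRET_KEY_BIT = 1
# Functional tests that miss the trigger see correct behavior...
assert trojaned_output(0, 0, 0, SECRET_KEY_BIT) == locked_logic(0, 0, 0, SECRET_KEY_BIT)
# ...while the adversary applies the trigger and reads the key bit directly.
print("leaked key bit:", trojaned_output(1, 0, 1, SECRET_KEY_BIT))
```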
  4. Deep neural networks are susceptible to model piracy and adversarial attacks when malicious end-users have full access to the model parameters. Recently, a logic locking scheme called HPNN has been proposed; HPNN utilizes a hardware root-of-trust to prevent end-users from accessing the model parameters. This paper investigates whether logic locking is secure for deep neural networks. Specifically, it presents a systematic I/O attack that combines algebraic and learning-based approaches. The attack incrementally extracts key values from the network to minimize sample complexity and employs a rigorous procedure to ensure the correctness of the extracted key values. Our experiments demonstrate the accuracy and efficiency of this attack on large networks with complex architectures. Consequently, we conclude that HPNN-style logic locking, and the variants of it we can foresee, are insecure for deep neural networks.
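As a crude sketch of the incremental I/O idea only (not the paper's algebraic procedure), the code below recovers one key bit at a time by comparing a hypothesized keyed model against black-box queries to the correctly keyed device. The sign-flip keying scheme and the tiny linear model are invented for illustration.

```python
# Crude sketch of incremental I/O key extraction (not the paper's exact algorithm):
# recover one key bit at a time by querying the deployed, correctly keyed model
# and keeping the hypothesis whose output matches.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                     # locked (public) weights, hypothetical
TRUE_KEY = np.array([1, 0, 1, 1])               # hidden in the hardware root of trust

def keyed_forward(x, key):
    signs = np.where(key == 1, -1.0, 1.0)       # hypothetical keying: a key bit flips a row's sign
    return (signs[:, None] * W) @ x

def device(x):                                  # the adversary's black-box oracle
    return keyed_forward(x, TRUE_KEY)

recovered = np.zeros(4, dtype=int)
for i in range(4):
    x = rng.normal(size=3)                      # one probe per key bit keeps sample complexity low
    for guess in (0, 1):
        k = recovered.copy()
        k[i] = guess
        # Output row i depends only on key bit i, so it can be checked in isolation.
        if np.allclose(keyed_forward(x, k)[i], device(x)[i]):
            recovered[i] = guess
            break

print("recovered key:", recovered, "matches:", np.array_equal(recovered, TRUE_KEY))
```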
  5. Piracy and overproduction of hardware intellectual property are growing concerns for the semiconductor industry under the fabless paradigm. Although chip designers have attempted to secure their designs against these threats by means of logic locking and obfuscation, the increasing number of powerful oracle-guided attacks makes it ever harder to evaluate the security of their designs and the associated overhead. In particular, as many so-called "provable" logic locking techniques are exposed to novel attack surfaces, overcoming these attacks may impose a large overhead on the circuit. Thus, in this paper, after investigating the shortcomings of state-of-the-art graph neural network models in logic locking and refuting the use of Hamming distance as a proper key-accuracy metric, we employ two machine learning models: a decision tree to predict the security degree of locked benchmarks and a convolutional neural network to assign a low-overhead, secure locking scheme to a given circuit. We first build multi-label datasets by running different attacks on benchmarks locked with existing logic locking methods to evaluate their security and compute the imposed area overhead. Then, we design and train a decision tree model to learn the features of the created dataset and predict the security degree of each given locked circuit. Furthermore, we utilize a convolutional neural network model to extract more features, obtain higher accuracy, and take overhead into account. We then test our trained models against different unseen benchmarks. The experimental results reveal that the convolutional neural network model is better at extracting features from large, unseen datasets, which helps in assigning secure, low-overhead logic locking to a given netlist.
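A minimal sketch of the first of the two models described above, assuming hypothetical per-benchmark features and a synthetic labeling rule standing in for real attack results:

```python
# Minimal sketch: a decision tree that maps features of a locked benchmark to a
# security-degree label. The feature set, labels, and tiny synthetic dataset are
# hypothetical placeholders, not the paper's multi-label attack-derived dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical per-benchmark features: [key length, #gates, key-gate density, avg fan-in]
X = rng.uniform(size=(200, 4)) * np.array([128, 5000, 0.2, 4.0])
# Hypothetical rule standing in for attack results: long keys + high density => "secure"
y = ((X[:, 0] > 64) & (X[:, 2] > 0.1)).astype(int)   # 1 = secure, 0 = broken by some attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on the synthetic data:", clf.score(X_te, y_te))
```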