The outsourcing of integrated circuit (IC) fabrication raises concerns of reverse-engineering, piracy, and overproduction of high-value intellectual property (IP). Logic locking was developed to address this by adding key-dependent logic gates to a design that obscure a chip's functionality during fabrication. However, recent advances have revealed that logic locking is susceptible to physical probing attacks, such as electro-optical frequency mapping (EOFM). In this work, we propose Adjoining Gates, a novel logic locking enhancement that places auxiliary logic gates near gates that leak key information when probed, obscuring them and thereby mitigating EOFM-style attacks. To implement Adjoining Gates, we developed an open-source security verification and design automation algorithm that detects EOFM key leakage during placement and inserts Adjoining Gates into a design. Our evaluation shows that our proposed approach identified and mitigated all EOFM-extractable key leakage across 16 benchmarks of varying sizes, locking techniques, and probe resolutions with a 4.15% average gate count overhead.
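The intuition behind an Adjoining Gate can be illustrated with a toy model (a sketch only; the 1-D probe model, activity values, and all function names here are illustrative assumptions, not the paper's algorithm). An EOFM probe is modeled as summing the switching activity of every gate within its resolution radius; a gate leaks a key bit if that summed activity differs between key values, and a nearby gate with complementary activity masks the difference.

```python
# Toy EOFM leakage model (illustrative assumptions throughout).

def probe(gates, cx, radius=1.0):
    """Summed switching activity of gates within the probe spot centered at cx (1-D)."""
    return sum(act for x, act in gates if abs(x - cx) <= radius)

def layout(key, with_adjoining=False):
    """Key-dependent gate at x=0 whose activity equals the key bit."""
    gates = [(0.0, key)]
    if with_adjoining:
        # Adjoining gate placed inside the same probe spot with
        # complementary activity, so the spot's total is key-independent.
        gates.append((0.5, 1 - key))
    return gates

# Without the adjoining gate, the probe distinguishes key=0 from key=1.
leaks = probe(layout(0), 0.0) != probe(layout(1), 0.0)
# With it, the summed activity in the spot is constant: leakage is masked.
masked = probe(layout(0, True), 0.0) == probe(layout(1, True), 0.0)
```

Here `leaks` is True and `masked` is True: the probe's aggregate reading no longer depends on the key bit once the adjoining gate sits within the same resolution spot.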
-
Integrated circuits are often fabricated in untrusted facilities, making intellectual property privacy a concern. This prompted the development of logic locking, a security technique that corrupts the functionality of a design unless a correct secret key is applied. Prior work has shown that system-level phenomena can degrade the security of locking, highlighting the importance of how locking is configured within a system. In this work, we propose a design space modeling framework that generates system-level models of the logic locking design space in arbitrary ICs by simulating a small, carefully selected portion of the design space. These models are used to automatically identify near-optimal locking configurations in a system that achieve security goals with minimal power/area overhead. We evaluate our framework with two experiments. 1) We evaluate the quality of modeling-produced solutions by exhaustively simulating locking in a RISC-V ALU. The models produced by our algorithm had an average R^2 > 0.99 for all design objectives and identified a locking configuration within 96% of the globally optimal solution after simulating < 3.6% of the design space. 2) We compare our model-based locking to conventional module-level locking in a RISC-V processor. The locking configuration from our model-based approach required 29.5% less power on average than conventional approaches and was the only method to identify a solution meeting all design objectives.
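The core idea, fitting a cheap surrogate model from a few expensive simulations and optimizing the surrogate instead of the full design space, can be sketched in a few lines (a minimal sketch under assumed details: the quadratic overhead function, the 1-D "fraction locked" space, and Lagrange interpolation as the model are all stand-ins, not the paper's framework).

```python
def simulate_overhead(x):
    """Stand-in for a costly locking simulation: overhead vs. fraction locked.
    The true shape is unknown to the modeler."""
    return (x - 0.6) ** 2 + 0.05

# 1) Simulate only a small, selected subset of the design space.
sample_xs = [0.0, 0.5, 1.0]
sample_ys = [simulate_overhead(x) for x in sample_xs]

def surrogate(x):
    """Cheap model of the design space: Lagrange interpolation through
    the sampled points (exact here because the true function is quadratic)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(sample_xs, sample_ys)):
        term = yi
        for j, xj in enumerate(sample_xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# 2) Optimize over the surrogate instead of re-running simulations.
best_x = min((i / 100 for i in range(101)), key=surrogate)
```

With three simulated points out of 101 candidate configurations, the surrogate's minimizer lands on the true optimum (x = 0.6), mirroring the paper's strategy of modeling the space from a small simulated fraction.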
-
Logic locking has been proposed to safeguard intellectual property (IP) during chip fabrication. Logic locking techniques protect hardware IP by making a subset of combinational modules in a design dependent on a secret key that is withheld from untrusted parties. If an incorrect secret key is used, a set of deterministic errors is produced in locked modules, restricting unauthorized use. A common target for logic locking is neural accelerators, especially as machine-learning-as-a-service becomes more prevalent. In this work, we explore how logic locking can be used to compromise the security of the neural accelerator it protects. Specifically, we show how the deterministic errors caused by incorrect keys can be harnessed to produce neural-trojan-style backdoors. To do so, we first outline a motivational attack scenario where a carefully chosen incorrect key, which we call a trojan key, produces misclassifications for an attacker-specified input class in a locked accelerator. We then develop a theoretically robust attack methodology to automatically identify trojan keys. To evaluate this attack, we launch it on several locked accelerators. In our largest benchmark accelerator, our attack identified a trojan key that caused a 74% decrease in classification accuracy for attacker-specified trigger inputs, while degrading accuracy by only 1.7% for other inputs on average.
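The mechanism the abstracts rely on, deterministic key-controlled errors, can be shown with a minimal XOR-locking sketch (illustrative only: the 2-input circuit, key width, and names are assumptions, not the paper's accelerator or attack code).

```python
# Minimal sketch of XOR-based logic locking: an internal wire is XOR-ed
# with a key bit, so only the correct key restores the original function,
# and every wrong key injects a deterministic, repeatable error.

def original(a: int, b: int) -> int:
    """Unlocked circuit: a simple AND gate."""
    return a & b

def locked(a: int, b: int, key: int) -> int:
    """Locked circuit: the AND output is XOR-ed with a 1-bit key."""
    return (a & b) ^ key

CORRECT_KEY = 0  # hypothetical secret withheld from the untrusted foundry

# Correct key: outputs match the original for every input pattern.
ok = all(locked(a, b, CORRECT_KEY) == original(a, b)
         for a in (0, 1) for b in (0, 1))

# Wrong key: every input yields a deterministic (here, inverted) error.
always_wrong = all(locked(a, b, 1) != original(a, b)
                   for a in (0, 1) for b in (0, 1))
```

Because wrong-key errors are deterministic rather than random, an attacker can search for a specific incorrect key whose error pattern fires only on chosen inputs, which is the trojan-key effect the third abstract describes.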