<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>Benchmarking Large Language Models for Automated Verilog RTL Code Generation</dc:title><dc:creator>Thakur, Shailja; Ahmad, Baleegh; Fan, Zhenxing; Pearce, Hammond; Tan, Benjamin; Karri, Ramesh; Dolan-Gavitt, Brendan; Garg, Siddharth</dc:creator><dc:corporate_author/><dc:editor/><dc:description>Automating hardware design could eliminate a significant amount of human error from the engineering process. Verilog is a popular hardware description language used to model and design digital systems, so automatically generating Verilog code is a critical first step. Emerging large language models (LLMs) are able to write high-quality code in other programming languages. In this paper, we characterize the ability of LLMs to generate useful Verilog. To do so, we fine-tune pre-trained LLMs on Verilog datasets collected from GitHub and Verilog textbooks. We construct an evaluation framework comprising test benches for functional analysis and a flow to test the syntax of Verilog code generated in response to problems of varying difficulty. Our findings show that, across our problem scenarios, fine-tuning makes LLMs more capable of producing syntactically correct code (25.9% overall). Further, when analyzing functional correctness, a fine-tuned open-source CodeGen LLM can outperform the state-of-the-art commercial Codex LLM (6.5% overall).
We release our training/evaluation scripts and LLM checkpoints as open-source contributions.</dc:description><dc:publisher/><dc:date>2023-04-01</dc:date><dc:nsf_par_id>10419705</dc:nsf_par_id><dc:journal_name>2023 Design, Automation &amp; Test in Europe Conference &amp; Exhibition (DATE)</dc:journal_name><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation>1 to 6</dc:page_range_or_elocation><dc:issn/><dc:isbn/><dc:doi>https://doi.org/10.23919/DATE56975.2023.10137086</dc:doi><dcq:identifierAwardId>2039607</dcq:identifierAwardId><dc:subject/><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>