<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>TAPA: A Scalable Task-parallel Dataflow Programming Framework for Modern FPGAs with Co-optimization of HLS and Physical Design</title></titleStmt>
			<publicationStmt>
				<publisher>ACM Transactions on Reconfigurable Technology and Systems</publisher>
				<date>12/31/2023</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10550474</idno>
					<idno type="doi">10.1145/3609335</idno>
					<title level='j'>ACM Transactions on Reconfigurable Technology and Systems</title>
<idno>1936-7406</idno>
<biblScope unit="volume">16</biblScope>
<biblScope unit="issue">4</biblScope>					

					<author>Licheng Guo</author><author>Yuze Chi</author><author>Jason Lau</author><author>Linghao Song</author><author>Xingyu Tian</author><author>Moazin Khatti</author><author>Weikang Qiao</author><author>Jie Wang</author><author>Ecenur Ustun</author><author>Zhenman Fang</author><author>Zhiru Zhang</author><author>Jason Cong</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[<p>In this article, we propose TAPA, an end-to-end framework that compiles a C++ task-parallel dataflow program into a high-frequency FPGA accelerator. Compared to existing solutions, TAPA has two major advantages. First, TAPA provides a set of convenient APIs that allows users to easily express flexible and complex inter-task communication structures. Second, TAPA adopts a coarse-grained floorplanning step during HLS compilation for accurate pipelining of potential critical paths. In addition, TAPA implements several optimization techniques specifically tailored for modern HBM-based FPGAs. In our experiments with a total of 43 designs, we improve the average frequency from 147 MHz to 297 MHz (a 102% improvement) with no loss of throughput and a negligible change in resource utilization. Notably, in 16 experiments, we make the originally unroutable designs achieve 274 MHz, on average. The framework is available at <ext-link ext-link-type='url' href='https://github.com/UCLA-VAST/tapa'>https://github.com/UCLA-VAST/tapa</ext-link> and the core floorplan module is available at <ext-link ext-link-type='url' href='https://github.com/UCLA-VAST/AutoBridge'>https://github.com/UCLA-VAST/AutoBridge</ext-link>.</p>]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>High-level synthesis (HLS) with the task-parallel programming model is an important tool to help programmers scale up the performance of their accelerators on modern FPGAs with ever-increasing resource capacities. Task-level parallelism is a form of parallelization of computer programs across multiple processors. In contrast to data parallelism, where the workload is partitioned on data and each processor executes the same program (e.g., OpenMP <ref type="bibr">[29]</ref>), different processors, or modules, in a task-parallel program often behave differently, while data are passed between processors. Examples of task-parallel programs include image processing pipelines <ref type="bibr">[17,</ref><ref type="bibr">54,</ref><ref type="bibr">67]</ref>, graph processing <ref type="bibr">[30,</ref><ref type="bibr">31,</ref><ref type="bibr">84,</ref><ref type="bibr">99]</ref>, and network switching <ref type="bibr">[63]</ref>. Research shows that even for data-parallel applications such as neural networks <ref type="bibr">[73,</ref><ref type="bibr">74,</ref><ref type="bibr">83]</ref> and stencil computation <ref type="bibr">[17]</ref>, task-parallel implementations show better scalability and higher frequency than their data-parallel counterparts due to the localized communication pattern <ref type="bibr">[25]</ref>.</p><p>Even though task-parallel programs are suitable for spatial architectures, existing FPGA computer-aided design (CAD) toolchains often fail in timing closure. One major cause of the unsatisfactory frequency is that HLS cannot easily predict the physical layout of the design after placement and routing. 
Thus, HLS tools typically rely on pre-characterized operation delays and a crude interconnect delay model to insert clock boundaries (i.e., registers) into an untimed design to generate a timed RTL implementation <ref type="bibr">[36,</ref><ref type="bibr">79,</ref><ref type="bibr">98]</ref>. Hence, as HLS accelerator system designs get larger to fully leverage the resources of a modern FPGA, the behavior-level estimation becomes even less accurate and the timing quality of the synthesized RTLs usually degrades further.</p><p>This timing issue is worsened as modern FPGA architectures become increasingly heterogeneous <ref type="bibr">[90]</ref>. Modern FPGAs have thousands of heterogeneous digital signal processing (DSP) and random-access memory (RAM) blocks and millions of lookup table (LUT) and flip-flop (FF) instances. To pack more logic onto a single device, the latest FPGAs integrate multiple dies using silicon interposers, but the interconnects that cross the die boundaries carry a non-trivial delay penalty. In addition, specialized IP blocks such as PCIe and DDR controllers are embedded in the programmable logic. These IP blocks usually have fixed locations near dedicated I/O banks and consume a large number of programmable resources nearby. As a result, these dedicated IPs often force nearby signals to detour onto more expensive and/or longer routing paths. This complexity and heterogeneity significantly challenge the effectiveness and efficiency of the modern FPGA CAD workflow.</p><p>Moreover, the recent release of High Bandwidth Memory (HBM)-based FPGA boards brings even more challenges to the timing closure of HLS designs. The key feature of the HBM device is that a super-wide data interface is exposed in a local region, which often causes severe local congestion. For example, the AMD/Xilinx U280 FPGA features 32 independent HBM channels at the bottom of the device, each with a 256-bit width running at 450 MHz. 
To fully utilize the potential of the HBM bandwidth, the physical design tools need to squeeze a substantial amount of logic into the area near the HBM blocks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The current timing-closure struggle in multi-die FPGAs originates from a disconnection between the HLS step and the downstream physical design step. The existing FPGA compilation stack consists of a sequence of independent design and optimization steps to lower the design abstraction gradually, but these steps lack cross-stack coordination and optimizations. Given a C++ task-parallel program input of an accelerator system, HLS can adjust the output RTL to change the pipeline level of the data links between tasks (modules), which are usually latency-insensitive, to break critical paths; however, the tool does not know which ones will be timing-critical. Conversely, the physical design tools can determine the critical paths but no longer have the option to add extra registers, because they must honor the cycle-accurate RTL input.</p><p>Several prior attempts couple the physical design process with HLS compilation <ref type="bibr">[6,</ref><ref type="bibr">9,</ref><ref type="bibr">23,</ref><ref type="bibr">48,</ref><ref type="bibr">77,</ref><ref type="bibr">78,</ref><ref type="bibr">94,</ref><ref type="bibr">98]</ref>. Zheng et al. <ref type="bibr">[98]</ref> propose to iteratively run placement and routing to obtain accurate delay statistics of each wire and operator. Based on the post-route information, HLS re-runs the scheduling step for better pipelining. Cong et al. <ref type="bibr">[23]</ref> is another representative work that presents placement-driven scheduling and binding for multi-cycle communications in an island-style architecture similar to FPGAs. Kim et al. <ref type="bibr">[48]</ref> propose to combine architectural synthesis with placement under a distributed-register architecture to minimize the system latency. Stammermann et al. <ref type="bibr">[77]</ref> propose methods to simultaneously perform floorplanning and functional unit binding to reduce power on interconnects. 
Chen et al. <ref type="bibr">[9]</ref> propose implementing HLS as a sub-routine to adjust the delay/power/variability/area of the circuit modules during the physical planning process across different IC layers, which only improves timing by 8%. The previous approaches share the common aspect of focusing on the fine-grained interaction between HLS and physical design, where individual operators and the associated wires and registers are all involved during the delay prediction and iterative HLS-layout co-optimization. While such a fine-grained method can be effective on relatively small HLS designs and FPGA devices, it is too expensive (if not infeasible) for today's large designs targeting multi-die FPGAs, where each implementation iteration from HLS to bitstream may take days to complete.</p><p>Therefore, we propose to re-structure the CAD stack and partially combine physical design with HLS in a coarse-grained fashion. Specifically, we propose to couple the coarse-grained floorplanning step with behavior-level pipelining in HLS. Our coarse-grained floorplanning involves dividing the FPGA device into a grid of regions and assigning each task to one region during HLS compilation. We further pipeline all the inter-region connections to facilitate timing closure while we leave the intra-region optimization to the default HLS tool. As our experiments will show, floorplanning a 4-die FPGA into only 6-8 regions is already enough to properly guide HLS for accurate elimination of global critical paths; thus, our floorplan-guided HLS approach is lightweight and highly scalable.</p><p>Our methodology relieves local congestion and fixes global critical paths at the same time. First, the early floorplanning step could guide the subsequent placement steps to distribute the user logic evenly across the entire device, instead of attempting to pack the logic into as few dies as possible, which alleviates local congestion. 
Second, the floorplan provides HLS with a view of the global physical layout that helps HLS accurately identify and pipeline the long wires, especially those crossing the die boundaries, so the global critical paths could be appropriately pipelined. Finally, we present analysis and latency balancing algorithms to guarantee that the throughput of the resulting design is not negatively impacted. Our contributions are as follows:</p><p>-To the best of our knowledge, we are the first to tackle the challenge of high-frequency HLS design on multi-die FPGAs by coupling floorplanning and pipelining to effectively insert registers on the long cross-die interconnects. We further ensure that the additional latency does not affect the throughput of the design. -We implement this methodology as an automated flow: we first invoke the TAPA compiler to extract the parallel tasks and synthesize each task using Vitis HLS to get its RTL representation and obtain an estimated area. Then the AutoBridge <ref type="bibr">[35]</ref> module of TAPA floorplans the program and determines a target region for each task. Based on the floorplan, we intelligently compute the pipeline stages of the communication logic between tasks and ensure that throughput will not degrade. TAPA generates the actual RTL of the pipeline logic that composes the tasks together. A constraint file is also generated to pass the floorplan information to the downstream tools.</p><p>-We present a set of optimizations specifically tailored for HBM devices, including automatic HBM port binding, floorplan solution space exploration, and a customized programming API to minimize the area overhead of HBM IO modules. -Our framework, TAPA, interfaces with the commercial FPGA design tool flow. It improves the average frequency of 43 designs from 147 MHz to 297 MHz with a negligible area overhead.</p><p>This article extends the two prior publications <ref type="bibr">[18,</ref><ref type="bibr">35]</ref> of the authors on this topic. 
Compared to the prior papers, this article includes additional contributions as follows:</p><p>-We integrate the co-optimization methodology from Reference <ref type="bibr">[35]</ref> with the programming framework in Reference <ref type="bibr">[18]</ref> to provide a fully automated, programmer-friendly, and robust workflow that consistently achieves higher frequency compared to existing commercial toolchains. -We extend the framework of Reference <ref type="bibr">[18]</ref> with additional APIs for external memory access, which significantly lowers BRAM consumption. This optimization enables the successful implementation of large-scale accelerators onto modern HBM-based FPGAs (Section 3.4). -We extend the co-optimization methods of Reference <ref type="bibr">[35]</ref> to consider the special challenges of programming HBM-based FPGAs, including automatic HBM channel binding and multi-floorplanning generation (Section 6). -We add four extra benchmarks that use a large number of HBM channels. We demonstrate how our new optimizations enable them to be successfully implemented with high frequency (Section 7.4).</p><p>Figure <ref type="figure">1</ref> shows the overall flow of our proposed methodology. The rest of the article is organized as follows: Section 2 introduces background information on modern FPGA architectures and shows motivating examples; Section 3 presents our proposed task-parallel programming model; Section 4 details our coarse-grained floorplan scheme inside the HLS flow; Section 5 describes our floorplan-aware pipelining methods; Section 6 elaborates our techniques tailored for HBM-based FPGAs; Section 7 presents experimental results; Section 8 discusses related work, followed by the conclusion and acknowledgments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">BACKGROUND AND MOTIVATING EXAMPLES</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">High-level Synthesis</head><p>The rapid increase of complexity in FPGA design has pushed industry and academia to raise the design abstraction above the register transfer level (RTL) for better productivity. High-level synthesis (HLS) plays a central role by enabling the automatic synthesis of high-level, untimed, or partially timed specifications (e.g., C++ or OpenCL) to low-level cycle-accurate RTL specifications for efficient implementation in field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs). The typical flow of modern FPGA HLS tools consists of three core steps: (1) scheduling, (2) binding, and (3) RTL generation.</p><p>-The scheduling phase inserts clock boundaries into the original untimed specification. It takes in the control data flow graph (CDFG) generated by a compiler front-end, which parses the source code and performs target-independent optimizations, from the high-level description (e.g., C/C++) and then maps the operations in the CDFG to states and the control flow to state transitions specified by a finite-state machine (FSM). In each clock cycle, the controller is in one state of the corresponding FSM. -The binding process maps high-level operations and variables to RTL-level resources, such as functional units and registers. It maps variables to registers and links wires from registers to functional units as operands of operations. The result of a functional unit is then wired to another functional unit or a register to store the computed value. -The RTL generation phase creates concrete RTL based on the results of the scheduling and the binding steps. The key in this step is to properly create the control logic to orchestrate the datapath, controlling each stage to execute at its scheduled cycle.</p></div>
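<div xmlns="http://www.tei-c.org/ns/1.0"><p><![CDATA[The scheduling step above can be made concrete with a toy as-soon-as-possible (ASAP) scheduler: each operation is placed in the earliest FSM state permitted by its data dependences. This is a simplified illustrative sketch (the function name and graph encoding are ours), not the scheduler of any production HLS tool:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy ASAP scheduler illustrating the scheduling step: operations are
// numbered 0..n-1 in topological order, deps[i] lists the operations whose
// results operation i consumes, and every operation takes one cycle.
// The returned vector gives the FSM state (clock cycle) of each operation.
std::vector<int> asap_schedule(const std::vector<std::vector<int>>& deps) {
  std::vector<int> state(deps.size(), 0);
  for (std::size_t i = 0; i < deps.size(); ++i)
    for (int d : deps[i])  // an op starts one cycle after its latest input
      state[i] = std::max(state[i], state[d] + 1);
  return state;
}
```

For the expression (a*b)+(c*d), the two multiplies have no dependences and the add depends on both, so the schedule places the multiplies in state 0 and the add in state 1; the FSM then executes each state in one clock cycle.]]></p></div>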
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Task-parallel Dataflow Programs</head><p>Task-level parallelism is a form of parallelization of computer programs across multiple processors.</p><p>In contrast to data parallelism where the workload is partitioned on data and each processor executes the same program (e.g., OpenMP <ref type="bibr">[29]</ref>), different processors in a task-parallel program often behave differently, while data are passed between processors. For example, the multiple stages in an image-processing pipeline <ref type="bibr">[17,</ref><ref type="bibr">54,</ref><ref type="bibr">67]</ref> can each be implemented in a different processor. Task-parallel programs are often described using dataflow models <ref type="bibr">[5,</ref><ref type="bibr">41,</ref><ref type="bibr">46,</ref><ref type="bibr">51,</ref><ref type="bibr">66]</ref>, where tasks are called processes. Processes communicate only through unidirectional channels. Data exchanged through channels are called tokens. In this article, we borrow the terms channel and token and focus on the problem of statically mapping tasks to hardware. That is, instances of tasks are synthesized to different areas in an FPGA accelerator.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Multi-die FPGA Architectures</head><p>Figure <ref type="figure">2</ref> shows three representative multi-die FPGA architectures, each of which is described in more detail as follows:</p><p>-The Xilinx Alveo U250 FPGA is one of the largest FPGAs with four dies. All the I/O banks are located in the middle column and the four DDR controller IPs are positioned vertically in a tall-and-slim rectangle in the middle. On the right lies the Vitis platform region <ref type="bibr">[91]</ref>, which incorporates the DMA IP, the PCIe IP, and so on, and serves to communicate with the host CPU. -The Xilinx Alveo U280 FPGA is integrated with the latest HBM <ref type="bibr">[20,</ref><ref type="bibr">21,</ref><ref type="bibr">92]</ref>, which exposes 32 independent memory ports at the bottom of the chip. I/O banks are located in the middle columns. Meanwhile, there is a gap region void of programmable logic in the middle. -The Intel Stratix 10 FPGA <ref type="bibr">[42]</ref> also sets the DDR controller and I/O banks in the middle of the programmable logic. The embedded multi-die interconnect bridges and the PCIe blocks are distributed at the two sides of the die, allowing multiple dies to be integrated together in an FPGA package. Although this article uses Xilinx FPGAs to demonstrate the idea, our methodology is also applicable to Intel FPGAs and other architectures.</p><p>Compared to previous generations, the latest multi-die FPGA architectures are divided into disjoint regions, where crossing a region boundary naturally incurs additional signal delay. In addition, the large pre-located IPs consume significant programmable resources near their fixed locations, which may also cause local routing congestion. These characteristics can prevent the existing HLS flows from achieving a high frequency.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Motivating Examples</head><p>We show two examples to motivate our floorplan-guided HLS approach. First, Figure <ref type="figure">3</ref> shows a CNN accelerator implemented on the Xilinx U250 FPGA. It interacts with three DDR controllers, as marked in grey, pink, and yellow blocks in the figure. In the original implementation result, the whole design is packed close together within die 2 and die 3. To demonstrate our proposed idea, we first manually floorplan the design to distribute the logic in four dies and to avoid overlapping the user logic with DDR controllers. Additionally, we pipeline the FIFO channels connecting modules in different dies as demonstrated in the figure. The manual approach improves the final frequency by 53%, from 216 MHz to 329 MHz.</p><p>Second, Figure <ref type="figure">4</ref> shows a stencil computation design on the Xilinx U280 FPGA. It consists of four identical tasks in a linear topology, with each color representing a kernel. In the original implementation, the tool's choice of die-crossing wires is sub-optimal, and one kernel may be divided among multiple regions. Instead, in our approach, we pre-determine all the die-crossing wires during HLS compilation and pipeline them, so the die boundaries will not cause problems for the placement and routing tool. For this example, we achieve 297 MHz while the design is originally unroutable. 
</p><p>using namespace tapa;

void Load(mmap&lt;int&gt; mmap, ostream&lt;int&gt;&amp; stream, int n) { // to be compiled by Vitis HLS
  for (int i = 0; i &lt; n; ++i) stream &lt;&lt; mmap[i];
}

void Store(istream&lt;int&gt;&amp; stream, mmap&lt;int&gt; mmap, int n) { // to be compiled by Vitis HLS
  for (int i = 0; i &lt; n; ++i) stream &gt;&gt; mmap[i];
}

void VecAdd(mmaps&lt;int, PE_NUM&gt; mem_1, mmaps&lt;int, PE_NUM&gt; mem_2, int n) { // to be compiled by TAPA
  streams&lt;int, PE_NUM, FIFO_DEPTH&gt; str_a, str_b, str_c;
  tapa::task()
    .invoke&lt;PE_NUM&gt;(Load, mem_1, str_a, n)
    .invoke&lt;PE_NUM&gt;(Load, mem_2, str_b, n)
    .invoke&lt;PE_NUM&gt;(Add, str_a, str_b, str_c, n)
    .invoke&lt;PE_NUM&gt;(Store, str_c, mem_2, n);
}</p><p>Listing 1. Accelerator task instantiation in TAPA.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">PROGRAMMING MODEL AND INTERFACES</head><p>In this section, we present the detailed task-programming model of TAPA and the user interfaces.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Basic Concepts</head><p>TAPA dataflow programs explicitly decouple communication and computation. A TAPA program has two types of building blocks: streams and tasks. A stream will be mapped to a FIFO in hardware; a task consumes data from input streams, performs arbitrary computation, then produces data into its output streams. All tasks execute in parallel and communicate with each other through streams. Listing 1 shows an example TAPA program that instantiates PE_NUM kernels, and each kernel loads two vectors from external memory, adds them up, and stores the results back into external memory. The VecAdd function instantiates the three lower-level tasks and defines the communication streams between them. It takes three arguments: two arrays of mmap interfaces for the vectors (mem_2 holds both an input vector and the results) and one scalar for the vector length. Three arrays of communication streams are defined in VecAdd. The three lower-level tasks are instantiated four times in total because there are two input vectors, each of which needs its own Load. The VecAdd function is an upper-level task. It is also the top-level task that defines the interface between the kernel and the host. Once the four child task instances are instantiated, they will run in parallel and their parent will wait until all children finish.</p></div>
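<div xmlns="http://www.tei-c.org/ns/1.0"><p><![CDATA[The dataflow semantics of Listing 1 can be mimicked in plain C++ for intuition: each stream becomes a FIFO queue and the task instances fire in sequence. This is an illustrative software model only (the *Model names are ours, not TAPA API); the real TAPA runtime executes the tasks concurrently:

```cpp
#include <queue>
#include <vector>

// Plain-C++ software model of the dataflow in Listing 1: each TAPA stream
// becomes a FIFO queue, and firing the tasks in sequence reproduces the
// result of the parallel execution.
using Stream = std::queue<int>;

void LoadModel(const std::vector<int>& mem, Stream& out) {
  for (int v : mem) out.push(v);  // models: stream << mmap[i]
}

void AddModel(Stream& a, Stream& b, Stream& c) {
  while (!a.empty() && !b.empty()) {
    c.push(a.front() + b.front());  // element-wise addition of two token streams
    a.pop();
    b.pop();
  }
}

void StoreModel(Stream& c, std::vector<int>& mem) {
  for (int& v : mem) {  // models: stream >> mmap[i]
    v = c.front();
    c.pop();
  }
}

// Mirrors VecAdd: load both vectors, add element-wise, store into mem_2.
std::vector<int> VecAddModel(const std::vector<int>& mem_1, std::vector<int> mem_2) {
  Stream str_a, str_b, str_c;
  LoadModel(mem_1, str_a);
  LoadModel(mem_2, str_b);
  AddModel(str_a, str_b, str_c);
  StoreModel(str_c, mem_2);  // results overwrite mem_2, as in Listing 1
  return mem_2;
}
```

The same sequential firing is essentially what TAPA's software simulation mode provides, which is why a single-source program can be debugged on the CPU before synthesis.]]></p></div>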
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Hierarchical Programming Model</head><p>TAPA uses a hierarchical programming model. Each task is either a leaf that does not instantiate any streams or tasks or a collection of tasks and streams with which the tasks communicate. A task that instantiates a set of tasks and streams is called the parent task for that set. Correspondingly, the instantiated tasks are the child tasks of their parent, and they may be parents of their own children. Each stream must be connected to exactly two tasks. One of the tasks must act as a producer and the other must act as a consumer. Meanwhile, each task can connect to an arbitrary number of streams. The producer streams tokens to the consumer via the stream in first-in-first-out (FIFO) order. Each task is implemented as a C++ function; tasks communicate with each other via the communication interface. A parent task instantiates streams and tasks using the instantiation interface and waits until all its child tasks finish. One of the tasks is designated as the top-level task, which defines the communication interfaces external to the FPGA accelerator, i.e., the system integration interface.</p></div>
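<div xmlns="http://www.tei-c.org/ns/1.0"><p><![CDATA[The structural rules above (every stream connects exactly one producer to exactly one consumer, while a task may touch any number of streams) can be checked mechanically. The following validator is a hypothetical sketch of such a check (Endpoint and valid_task_graph are our names, not part of the TAPA toolchain):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// One record per (task, stream) connection in the task graph.
struct Endpoint {
  std::string task;    // task instance name
  std::string stream;  // stream instance name
  bool is_producer;    // true if the task writes the stream
};

// A graph is valid iff every stream has exactly one producer and one consumer.
bool valid_task_graph(const std::vector<Endpoint>& endpoints) {
  std::map<std::string, std::pair<int, int>> count;  // stream -> (#producers, #consumers)
  for (const Endpoint& e : endpoints) {
    if (e.is_producer)
      ++count[e.stream].first;
    else
      ++count[e.stream].second;
  }
  for (const auto& kv : count)
    if (kv.second != std::make_pair(1, 1)) return false;
  return true;
}
```

For example, a stream written by Load and read by Add passes the check, while a stream with two producers is rejected.]]></p></div>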
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Convenient Programming Interfaces</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.1">Communication</head><p>Interface. TAPA provides separate communication APIs for the producer side and the consumer side, which use ostream and istream as the interfaces, respectively. The producer of a stream can test the fullness of the stream and append tokens to the stream (write) if the stream is not full. The consumer of a stream can test the emptiness of the stream and remove tokens from the stream (destructive read) or duplicate the head token without removing it (nondestructive read, a.k.a. peek) if the stream is not empty. Read, peek, and write operations can be blocking or non-blocking.</p><p>A special token denoting end-of-transaction (EoT) is available to all streams. A process can "close" a stream by writing an EoT token to it, and a process can "open" a stream by reading an EoT token from it. A process can also test if a stream is closed, which is a non-destructive read operation to the stream (eot). An EoT token does not contain any useful data. This is designed deliberately to make it possible to break from a pipelined loop when an EoT is present, for example, in Line 3 of Listing 2. Listing 2 shows an example of how the communication interfaces are used.</p></div>
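<div xmlns="http://www.tei-c.org/ns/1.0"><p><![CDATA[To make the read/peek/EoT semantics concrete, below is a simplified software model of a single stream. MockStream is our name and this is an illustrative sketch only, not TAPA's implementation; tapa::ostream/istream expose analogous operations (write/close on the producer side; empty/eot/peek/read/open on the consumer side):

```cpp
#include <deque>
#include <optional>

// Simplified stream model: data tokens carry an int payload; the EoT token
// carries no data and is modeled here as std::nullopt.
class MockStream {
  std::deque<std::optional<int>> buf_;

 public:
  // producer side
  void write(int v) { buf_.push_back(v); }
  void close() { buf_.push_back(std::nullopt); }  // append an EoT token

  // consumer side
  bool empty() const { return buf_.empty(); }
  bool eot() const { return !buf_.empty() && !buf_.front().has_value(); }
  int peek() const { return *buf_.front(); }  // duplicate head token, keep it
  int read() {  // destructive read: remove and return the head token
    int v = *buf_.front();
    buf_.pop_front();
    return v;
  }
  void open() { buf_.pop_front(); }  // consume the EoT token
};
```

A consumer loop in this model first tests eot(); if the head is a data token, it may peek() any number of times before a single read() consumes the token, mirroring the pipelined-loop pattern the EoT design enables.]]></p></div>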
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.2">Instantiation</head><p>Interface. A parent task can instantiate streams using stream&lt;type,capacity&gt;. For example, stream&lt;Pkt,2&gt; instantiates a stream with capacity 2, and data tokens transmitted using this stream have type Pkt. Tasks are instantiated using task::invoke, with the first argument being the task function and the rest of the arguments being the arguments to the task instance. This is consistent with std::invoke in the C++ standard library. Listing 2. Demonstration of TAPA peek() API and the usage of TAPA transactions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.3">Detached Tasks.</head><p>Sometimes making the termination of the program dependent on every kernel function is overkill. For example, a task function may be purely data-driven, and its termination need not gate the termination of the program. In that case, TAPA allows users to detach a task on invocation instead of joining it to the parent, through the task().invoke&lt;detach&gt;() API. Such tasks can keep running as long as input data are available, and the central controller will not wait for them to finish when determining the termination of the program.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.4">System Integration</head><p>Interface. TAPA uses a unified system integration interface to further reduce programmers' burden. To offload a kernel to an FPGA accelerator, programmers only need to call the top-level task as a C++ function in the host code. Since TAPA can extract metadata information, e.g., argument types, from the kernel code, TAPA will automatically synthesize proper OpenCL host API calls and emit an implementation of the top-level task C++ function that sets up the runtime environment properly. As a user of TAPA, the programmer can use a single function invocation in the same source code to run software simulation, hardware simulation, and on-board execution, with the only difference being the kernel binary specified.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Asynchronous External Memory Access</head><p>To provide users with more flexibility in accessing the external memory and to reduce the area overhead, we develop the async_mmap API. The async_mmap object represents an AXI channel as five independent streams, as shown in Figure <ref type="figure">6</ref>. To read from an HBM channel, the user only needs to write out the target addresses one-by-one into the read_addr stream. The received data will be directly streamed to the user logic through another stream interface. Listing 4 shows an example of a sequential read operation. Likewise, to write data out into external memory, the user only needs to push the individual addresses and data to the write_addr and write_data channels.</p><p>Contrary to existing HLS solutions that infer burst transactions at compile time through static analysis, we implement a runtime burst detection mechanism. The burst detector module will inspect the input addresses and merge sequential reads or writes into a longer burst transaction. Table <ref type="table">1</ref> demonstrates the behavior of the burst detector. The input side is always available to accept new input addresses as long as the downstream AXI adapter is not congested. Internally, the burst detector will keep track of the longest stream of sequential addresses it has seen so far. Once a new input address is not consecutive with the previous address, the burst detector will conclude the last transaction and produce a burst transaction to the output. Meanwhile, it will update the starting address of the next burst. In the case that the next input address does not arrive within a threshold number of cycles, the burst detector will also conclude and issue the current burst. 
The burst detector ensures that memory access is as efficient as inferring burst transactions statically.</p><p>The async_mmap API gives HLS users a level of fine-grained control similar to that of the RTL level. Users can use one dedicated task to issue read requests and another task to receive the read responses, which is hardly feasible in the common approach that abstracts the external memory as an array.</p><p>template &lt;typename T&gt;
struct async_mmap {
  using addr_t = int64_t; using resp_t = uint8_t;
  tapa::ostream&lt;addr_t&gt; read_addr;
  tapa::istream&lt;T&gt; read_data;
  tapa::ostream&lt;addr_t&gt; write_addr;
  tapa::ostream&lt;T&gt; write_data;
  tapa::istream&lt;resp_t&gt; write_resp;
};</p><p>Listing 3. The user interface of the async_mmap API.</p><p>void load(tapa::async_mmap&lt;Elem&gt;&amp; hbm, int n) {
  for (int i_rd_req = 0, i_rd_resp = 0; i_rd_resp &lt; n; ) {
#pragma HLS pipeline II=1
    // issue read request
    if (i_rd_req &lt; n &amp;&amp; !hbm.read_addr.full()) {
      hbm.read_addr.write(i_rd_req);
      ++i_rd_req;
    }
    // receive the read response
    if (!hbm.read_data.empty()) {
      Elem elem = hbm.read_data.read();
      ++i_rd_resp;
    }
  }
}</p><p>Listing 4. Example of reading data with the async_mmap API.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Overall Compilation Steps</head><p>Figure <ref type="figure">1</ref> shows the overall compilation flow. Given an input TAPA program, the tool will compile the top function into RTL and then invoke an existing HLS tool to compile each individual task. For the top function, the tool will analyze and extract how the tasks are interconnected and construct a task graph. The task graph will be used by the floorplan tools to assign each task to one region of the device and determine the pipeline level of each stream. The details of the floorplan process will be discussed in the following sections.</p><p>In cycles 0-3, four sequential addresses are consumed, and the detector keeps track of the length of the current burst. In cycle 4, the new input address (128) is not consecutive to the last input address (64), thus the burst detector puts an end to the last burst tracking process and issues a burst transaction on the output side.</p></div>
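<div xmlns="http://www.tei-c.org/ns/1.0"><p><![CDATA[The merging behavior of the runtime burst detector can be modeled in a few lines of plain C++. The sketch below is illustrative only (detect_bursts is our name): it merges consecutive element addresses into bursts, and the timeout path, where a burst is concluded because no new address arrives, is omitted:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Software model of runtime burst detection: consecutive element addresses
// (stride 1) extend the tracked burst; a non-consecutive address concludes
// the current burst and starts a new one. Returns (start_address, length)
// pairs in issue order.
std::vector<std::pair<long, int>> detect_bursts(const std::vector<long>& addrs) {
  std::vector<std::pair<long, int>> bursts;
  if (addrs.empty()) return bursts;
  long start = addrs[0];
  int len = 1;
  for (std::size_t i = 1; i < addrs.size(); ++i) {
    if (addrs[i] == addrs[i - 1] + 1) {
      ++len;  // still sequential: keep tracking the current burst
    } else {
      bursts.push_back({start, len});  // conclude and issue the burst
      start = addrs[i];                // start tracking the next burst
      len = 1;
    }
  }
  bursts.push_back({start, len});  // flush the final burst
  return bursts;
}
```

Feeding four sequential addresses followed by a jump yields two bursts: one of length four starting at the first address and one of length one at the jump target, matching the cycle-by-cycle behavior described above.]]></p></div>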
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">COUPLING HLS WITH COARSE-GRAINED FLOORPLANNING</head><p>In this section, we present our coarse-grained floorplanning scheme that assigns TAPA tasks to different regions of the programmable fabric. We call this TAPA floorplanning module AutoBridge; it extends our prior work of the same name <ref type="bibr">[35]</ref>.</p><p>Note that the focus of this work is not on improving floorplanning algorithms; instead, we intend to properly use coarse-grained floorplan information to guide HLS and placement.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Coarse-grained Floorplanning Scheme</head><p>Instead of finding a dedicated region with a detailed aspect ratio for each module, we choose to view the FPGA device as a grid that is formed by the die boundaries and the large IP blocks. These physical barriers split the programmable fabric apart into a series of disjoint slots in the grid, where each slot represents a sub-region of the device isolated by die boundaries and IP blocks. Using our coarse-grained floorplanning, we will assign each function of the HLS design to one of these slots.</p><p>For example, for the Xilinx Alveo U250 FPGA, the array of DDR controllers forms a vertical split in the middle column, and there are three horizontal die boundaries. Thus, the device can be viewed as a grid of 8 slots in 2 columns and 4 rows. Similarly, the U280 FPGA can be viewed as a grid of 6 slots in 2 columns and 3 rows.</p><p>In this scheme, each slot contains about 700 BRAM_18Ks, 1500 DSPs, 400K Flip-Flops, and 200K LUTs. Meanwhile, to reduce the resource contention in each slot, we set a maximum utilization ratio for each slot to guarantee enough blank space. Experiments show that such slot sizes are suitable, and HLS has a good handle on the timing quality of the local logic within each slot, as shown in Section 7.</p><p>The inputs to our floorplanner include: (1) a task graph extracted from the TAPA program, where each vertex represents a task and each edge represents a FIFO channel; (2) the grid of slots of the target device; (3) the maximum resource utilization ratios for each slot; and (4) location constraints such that certain IO modules must be placed near certain IP blocks. In addition, we may have constraints that certain vertices must be assigned to the same slot. This is for throughput concerns and will be explained in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Problem Formulation</head><p>Goal: Assign each v &#8712; V to one of the slots such that (1) the resource utilization ratio<ref type="foot">foot_1</ref> of each slot is below the given limit; and (2) the cost function is minimized. We choose the total number of slot-crossings as the cost instead of the total estimated wire length. Specifically, the cost function is defined as</p><p>cost = &#931;_{e_ij &#8712; E} e_ij.width &#215; (|v_i.row - v_j.row| + |v_i.col - v_j.col|), (1)</p><p>where e_ij.width is the bitwidth of the FIFO channel connecting v_i and v_j, and module v is assigned to the v.col-th column and the v.row-th row. The physical meaning of the cost function is the sum of the number of slot boundaries that every wire crosses.</p></div>
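For illustration, the cost function can be evaluated in a few lines of Python (the language of TAPA's floorplanning module); the task names and slot coordinates below are hypothetical.

```python
def slot_crossing_cost(edges, placement):
    """Total slot-boundary crossings, weighted by FIFO bitwidth.

    edges: list of (u, v, width) FIFO channels.
    placement: dict mapping each task to its (row, col) slot.
    """
    cost = 0
    for u, v, width in edges:
        (ru, cu), (rv, cv) = placement[u], placement[v]
        # Manhattan distance in the slot grid equals the number of
        # slot boundaries the connection must cross.
        cost += width * (abs(ru - rv) + abs(cu - cv))
    return cost
```

For example, a 512-bit channel between tasks two rows apart contributes 512 × 2 to the cost, while a 32-bit channel crossing one column boundary contributes only 32.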
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Solution</head><p>Our problem is relatively small in size, as the number of tasks in behavior-level task-parallel programs (typically fewer than a thousand) is much smaller than the number of gates in a logic netlist. We adopt the main idea of top-down partitioning-based placement algorithms <ref type="bibr">[4,</ref><ref type="bibr">32,</ref><ref type="bibr">57]</ref> to solve our problem. Meanwhile, due to the relatively small problem size, we can pursue an exact solution for each partitioning process. Figure <ref type="figure">8</ref> demonstrates the floorplanning of an example design through three iterations of partitioning. The top-down partitioning-based approach starts with the initial state where all modules are assigned to the same slot, iteratively partitions the current slots in half into two child slots, and then assigns the modules into the child slots. Each partitioning involves splitting all of the current slots in half either horizontally or vertically.</p><p>We formulate the partitioning process of each iteration using integer linear programming (ILP). In every partitioning iteration, all current slots need to be divided in half. Since some of the modules in a slot may be tightly connected to modules outside of the slot, ignoring such connections can adversely affect the quality of the assignment. Therefore, our ILP formulation considers the partitioning of all slots together for an exact solution, which is possible due to the small problem size. Experiments in Section 7 show that our ILP formulation is solvable within a few seconds or minutes for designs of hundreds of modules.</p><p>Table 2. Coordinates of Selected Vertices in Figure 8</p><p>Performing an N-way partitioning is another potential method. 
However, experiments show that it is much slower than our iterative 2-way partitioning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ILP Formulation of One Partitioning Iteration.</head><p>The formulation declares a binary decision variable v_d for each v to denote whether v is assigned to the left or the right child slot during a vertical partitioning (or to the upper or the lower child slot for a horizontal one). Let R denote the set of all current slots. For each slot r &#8712; R to be divided, we use r_v to denote the set of all vertices that r currently accommodates. To ensure that the child slots have enough resources for all modules assigned to them, the ILP formulation imposes the following resource constraint for each child slot r_sub and for each type of on-chip resource:</p><p>&#931;_{v &#8712; r_v assigned to r_sub} v.area &#8804; (r_sub).area,</p><p>where v.area is the resource requirement of v and (r_sub).area represents the available resources in the child slot divided from r.</p><p>To express the cost function that is based on the coordinates of each module, we first need to express the new coordinates (v.row, v.col) of v based on the previous coordinates ((v.row)_prev, (v.col)_prev) and the decision variable v_d. For a vertical partitioning, the new coordinates of v will be</p><p>v.col = 2 &#215; (v.col)_prev + v_d, v.row = (v.row)_prev.</p><p>And for a horizontal partitioning, the new coordinates will be</p><p>v.row = 2 &#215; (v.row)_prev + v_d, v.col = (v.col)_prev.</p><p>Finally, the objective is to minimize the total slot-crossing cost shown in Equation (<ref type="formula">1</ref>) for each partitioning iteration.</p><p>For the example in Figure <ref type="figure">8</ref>, Table <ref type="table">2</ref> shows the row and col indices of selected vertices in each partitioning iteration.</p></div>
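In TAPA, the partitioning ILP is solved with the Python MIP package and Gurobi (Section 7.1). For intuition only, the same optimization can be sketched by exhaustive search over the binary decision variables; the toy instance below is hypothetical and simplifies to a single slot and a single resource type.

```python
from itertools import product

def partition_once(tasks, edges, areas, capacity):
    """One 2-way partitioning iteration, solved by brute force.

    tasks: task names, all currently in one slot.
    edges: list of (u, v, width) FIFO channels.
    areas: dict task -> resource requirement (one resource type).
    capacity: available resources in each child slot.
    Returns the assignment dict task -> 0 (left) or 1 (right) that
    minimizes the slot-crossing cost, or None if no split fits.
    """
    best, best_cost = None, float("inf")
    for bits in product((0, 1), repeat=len(tasks)):
        assign = dict(zip(tasks, bits))
        # Resource constraint: each child slot must fit its tasks.
        for side in (0, 1):
            if sum(areas[t] for t in tasks if assign[t] == side) > capacity:
                break
        else:
            # Cost: total width of channels cut by the partition line.
            cost = sum(w for u, v, w in edges if assign[u] != assign[v])
            if cost < best_cost:
                best, best_cost = assign, cost
    return best
```

On a chain a-b-c-d where the b-c channel is the narrowest, the cheapest feasible split cuts exactly that channel, mirroring how the real ILP trades cut width against the resource budget.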
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">FLOORPLAN-AWARE PIPELINING</head><p>Based on the generated floorplan, we aim to pipeline every cross-slot connection to facilitate timing closure. Although HLS has the flexibility to pipeline these connections to increase the final frequency, the additional latency could potentially lead to a large increase in execution cycles, which we need to avoid. This section presents our methods to pipeline slot-crossing connections without hurting the overall throughput of the design. We first focus on pipelining dataflow designs, then extend the method to other types of HLS designs. In Section 5.1, we introduce the approach of pipelining with latency balancing; Section 5.2 presents the detailed algorithm. In Section 5.3, we present how to utilize the internal computation pattern to construct loop-level dataflow graphs that allow more pipelining opportunities.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Pipelining Followed by Latency Balancing for Dataflow Designs</head><p>In our problem, an HLS dataflow design consists of a set of concurrently executing functions communicating through FIFO channels, where each function is compiled into an RTL module controlled by an FSM <ref type="bibr">[65]</ref>. The rich expressiveness of FSMs makes it difficult to statically determine how the additional latency will affect the total execution cycles. Note that our problem is different from simplified dataflow models such as Synchronous Data Flow (SDF) <ref type="bibr">[51]</ref> and the Latency Insensitive Theory (LIT) <ref type="bibr">[7]</ref>, where the firing rate of each vertex is fixed. Unlike SDF and LIT, in our problem each vertex is an FSM whose firing rate is not fixed and can follow complex patterns.</p><p>Therefore, we adopt a conservative approach: we first pipeline all edges that cross slot boundaries, then balance the latency of parallel paths based on cut-set pipelining <ref type="bibr">[64]</ref>. A cut-set is a set of edges that can be removed from the graph to create two disconnected sub-graphs; if all edges in a cut-set are of the same direction, then we can add an equal amount of latency to each edge without affecting the throughput of the design. Figure <ref type="figure">9</ref>(a) illustrates the idea. If we need to add one unit of latency to e_13 (marked in red) due to the floorplan results, then we need to find a cut-set that includes e_13 and balance the latency of all other edges in this cut-set (marked in blue).</p><p>Since we can choose different cut-sets to balance the same edge, we need to minimize the area overhead. For example, for e_13, balancing cut-set 2 in Figure <ref type="figure">9</ref>(b) incurs a smaller area overhead than cut-set 1 in Figure <ref type="figure">9</ref>(a), as the width of e_47 is smaller than that of e_14. 
Meanwhile, it is possible for multiple edges to be included in the same cut-set. For example, the edges e_27 and e_37 are both included in cut-set 3, so we only need to balance the other edges in cut-set 3 once.</p><p>Cut-set pipelining is equivalent to balancing the total added latency of every pair of reconvergent paths <ref type="bibr">[64]</ref>. A path is defined as one or multiple concatenated edges of the same direction; two paths are reconvergent if they share the same source vertex and destination vertex. When there are multiple edges with additional latency from the floorplanning step, we need to find a globally optimal solution that ensures all reconvergent paths have balanced latency while the area overhead is minimized.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Latency Balancing Algorithm</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Problem Formulation.</head><p>Given: A graph G(V, E) representing a dataflow design that has already been floorplanned and pipelined. Each vertex v &#8712; V represents a function in the dataflow design and each edge e &#8712; E represents a FIFO channel between functions. Each edge e &#8712; E is associated with e.width representing the bitwidth of the edge. For each edge e, the constant e.lat represents the additional latency inserted into e in the previous pipelining step. We use the integer variable e.balance to denote the amount of latency added to e in the current latency balancing step.</p><p>Goal: (1) For each edge e &#8712; E, compute e.balance such that for any pair of reconvergent paths {p_1, p_2}, the total latency on each path is the same: &#931;_{e &#8712; p_1} (e.lat + e.balance) = &#931;_{e &#8712; p_2} (e.lat + e.balance); and (2) minimize the total area overhead, which is defined as</p><p>&#931;_{e &#8712; E} e.balance &#215; e.width.</p><p>Fig. 9. Assume that the edges e_13, e_37, and e_27 are pipelined according to some floorplan, and each of them carries 1 unit of inserted latency. Also assume that the bitwidth of e_14 is 2 and that of all other edges is 1. In the latency balancing step, the optimal solution is to add 2 units of latency to each of e_47, e_57, and e_67 and 1 unit of latency to e_12. Note that edges e_27 and e_37 can exist in the same cut-set.</p><p>Note that this problem is different from the classic min-cut problem [59] for DAGs. One na&#239;ve solution is to find a min-cut for every pipelined edge and increase the latency of the other edges in the cut accordingly. However, this simple method is suboptimal. For example, in Figure <ref type="figure">9</ref>, since edges e_27 and e_37 can be in the same cut-set, we only need to add one unit of latency to the other edges in the cut-set (e.g., e_47, e_57, and e_67) so all paths are balanced.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Solution.</head><p>We formulate the problem in a restricted form of ILP that can be solved in polynomial time. For each vertex v_i, we associate it with an integer variable S_i that denotes the maximum latency from pipelining between v_i and the sink vertex of the graph. In other words, given two vertices v_x and v_y, (S_x - S_y) represents the maximum latency among all paths between the two vertices. Note that we only consider the latency on edges due to pipelining. For each edge e_ij, we have</p><p>S_i &#8805; S_j + e_ij.lat.</p><p>According to our definition, the additional balancing latency added to edge e_ij in this step can be expressed as e_ij.balance = S_i - S_j - e_ij.lat, since we want every path from v_i to v_j to have the same latency. The optimization goal is to minimize the total area overhead, i.e., the weighted sum of the additional depth on each edge:</p><p>minimize &#931;_{e_ij &#8712; E} e_ij.balance &#215; e_ij.width.</p><p>For example, assume that there are two paths from v_1 to v_2, where path p_1 has 3 units of latency from pipelining while p_2 has 1 unit. From our formulation, we will select the edge(s) on p_2 and add 2 additional units of latency to balance the total latency of p_1 and p_2 so the area overhead is minimized.</p><p>Our formulation is essentially a system of difference constraints (SDC), in which all constraints are in the form of x_i - x_j &#8804; b_ij, where b_ij is a constant and x_i, x_j are variables. Because of this restrictive form of constraints, we can solve the SDC as a linear programming problem while the solutions are guaranteed to be integers. As a result, it can be solved in polynomial time <ref type="bibr">[26,</ref><ref type="bibr">52]</ref>.</p><p>If the SDC formulation does not have a solution, then there must be a dependency cycle in the dataflow graph <ref type="bibr">[26]</ref>. 
This means that at least one of the edges in the dependency cycle is pipelined based on the floorplan. In this situation, we feed this information back to the floorplanner to constrain those vertices to the same slot and then generate a new floorplan.</p></div>
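For intuition, the formulation can be exercised on a small hypothetical graph modeled on Figure 9 (vertices renumbered 0-6; widths and latencies follow the assumptions in the caption). TAPA solves the SDC as a linear program via the Python MIP package; the sketch below instead brute-forces the integer labels, which is only viable for toy instances.

```python
from itertools import product

def balance_latency(edges, n_vertices, max_s=3):
    """Brute-force the latency-balancing problem on a small graph.

    edges: list of (i, j, width, lat), where lat is the pipeline
    latency already inserted on edge (i, j).  Searches integer labels
    S with S[i] >= S[j] + lat for every edge, minimizing the area
    overhead sum(width * balance), balance = S[i] - S[j] - lat.
    Returns (minimum cost, dict edge -> balance), or (None, None).
    """
    best_cost, best_balance = None, None
    for s in product(range(max_s + 1), repeat=n_vertices):
        if all(s[i] >= s[j] + lat for i, j, _, lat in edges):
            cost = sum(w * (s[i] - s[j] - lat) for i, j, w, lat in edges)
            if best_cost is None or cost < best_cost:
                best_cost = cost
                best_balance = {(i, j): s[i] - s[j] - lat
                                for i, j, _, lat in edges}
    return best_cost, best_balance
```

On the Figure 9-like instance, the cheapest balancing leaves the wide edge e_14 untouched and instead pads the unit-width edges, matching the caption's optimum (total overhead 1 + 2 + 2 + 2 = 7).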
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Efficient Pipelining Implementation</head><p>Figure <ref type="figure">10</ref> shows how we add pipelining to a FIFO-based connection. We adopt FIFOs that assert their full pin before the storage actually runs out, so we can directly register the interface signals without affecting the functionality.</p></div>
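The almost-full trick can be illustrated with a small cycle-level simulation in Python. The capacities, delays, and producer/consumer rates below are hypothetical, and the model folds both directions of wire delay into a single delayed full signal; it is a sketch of the idea, not TAPA's RTL.

```python
from collections import deque

def simulate(capacity, slack, wire_delay, n_items):
    """Cycle-level model of a FIFO whose 'full' signal is asserted
    while `slack` slots are still free, observed by the producer
    `wire_delay` cycles late (wire_delay >= 1).  Returns the peak
    occupancy, which stays within `capacity` when slack >= wire_delay.
    """
    fifo = deque()
    # Shift register modeling the pipelined (registered) full signal.
    full_pipe = deque([False] * wire_delay, maxlen=wire_delay)
    peak = written = 0
    for cycle in range(n_items * 4):
        stale_full = full_pipe[0]  # the producer's delayed view of full
        if written < n_items and not stale_full:
            fifo.append(written)   # producer writes one item per cycle
            written += 1
        if cycle % 3 == 2 and fifo:
            fifo.popleft()         # slower consumer: one read per 3 cycles
        peak = max(peak, len(fifo))
        full_pipe.append(len(fifo) >= capacity - slack)
    return peak
```

Because full is asserted `slack` slots early, the writes still in flight when the stale full signal finally reaches the producer cannot overflow the storage.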
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">OPTIMIZATION FOR HBM DEVICES</head><p>As will be shown in the evaluation section, the techniques from the previous sections are already effective at significantly improving timing on DDR-based FPGAs. However, more optimization techniques are needed to squeeze the best performance out of state-of-the-art HBM-based FPGAs. In this section, we present three major techniques tailored to the unique architecture of HBM-based FPGAs, where a large set of independent data channels is clustered closely at the edge of the device.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Reduce BRAM Usage with async_mmap</head><p>First, we present a system-level optimization to reduce the resource consumption near the HBM blocks by using the async_mmap API presented in Section 3.4. When interacting with the AXI interface, existing HLS tools buffer entire burst transactions using on-chip memories. For a 512-bit AXI interface, the AXI buffers generated by Vitis HLS cost 15 BRAM_18Ks each for the read channel and the write channel. While this is trivial for conventional DDR-based FPGAs, where only a few external DDR channels are available, such BRAM overhead becomes a huge problem for HBM devices. To use all 32 HBM channels, the AXI buffers alone take away more than 900 BRAM_18Ks, which accounts for more than 70% of the BRAM resources in the bottom SLR.</p><p>However, with the async_mmap interface, we no longer need to set aside a large buffer to accommodate the data in AXI burst transactions, because the flow control mechanism is explicitly included in the user code (Figure <ref type="figure">4</ref>). Table <ref type="table">3</ref> shows our resource reduction for just one HBM channel.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Automatic HBM Channel Binding</head><p>In the current FPGA HBM architecture, the HBM is divided into 32 channels, which are physically bundled into eight groups; each group contains four adjacent channels joined by a built-in 4 &#215; 4 crossbar. The crossbar provides full connectivity within the group. Meanwhile, each AXI interface at the user side can still access HBM channels outside its group. The data sequentially traverses the lateral connections until it reaches the crossbar connecting to the target channel; thus, inter-group accesses come with longer latency and potentially less bandwidth due to data link sharing. Therefore, the binding of logical buffers to physical HBM channels affects the design in two ways:</p><p>-Since intra-group access is more efficient than inter-group access, an inferior binding will negatively affect the available bandwidth. -As the HBM channels are hardened at fixed locations, the binding also affects the placement and routing of the logic that connects to HBM. Thus, an unoptimized binding may cause local congestion in the programmable logic near the HBM channels.</p><p>Existing CAD tools require that users explicitly specify the mapping of all HBM channels, which requires users to master low-level architecture details. Also, since the binding does not affect the correctness of the design, users are often unaware of suboptimal choices.</p><p>To alleviate the problem, we propose a semi-automated solution. We observe that very often the design only involves intra-group HBM accesses. In this case, the binding decision does not affect the HBM bandwidth and latency and only impacts the placement and routing of nearby logic. 
Therefore, we implement an API with which users can specify a partial binding of channels (or none at all) and let TAPA automatically determine the binding for the rest.</p><p>Specifically, we incorporate the HBM binding process into our floorplanning step. We treat the number of available HBM channels as another type of resource for the slots. Slots that are directly adjacent to the HBM blocks are treated as having the corresponding number of HBM channels, while other slots have zero available HBM channel resources. Meanwhile, each task that directly interacts with an HBM channel is treated as requiring one unit of HBM channel resources, and other tasks are regarded as requiring none.</p></div>
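With this modeling, the channel-binding constraint reduces to the same per-slot resource check used for LUTs, FFs, BRAMs, and DSPs. A minimal sketch (task names, resource names, and budgets below are hypothetical):

```python
def slot_feasible(tasks_in_slot, requirements, slot_capacity):
    """Check one slot's resource constraints, with HBM channels treated
    as just another resource type.

    requirements: dict task -> dict resource -> amount; a task that
      directly drives an HBM AXI port requests {'hbm_channel': 1}.
    slot_capacity: dict resource -> available amount; only slots
      adjacent to the HBM stack have a nonzero 'hbm_channel' budget.
    """
    for res, budget in slot_capacity.items():
        used = sum(requirements[t].get(res, 0) for t in tasks_in_slot)
        if used > budget:
            return False
    return True
```

A task requiring an HBM channel therefore can never be floorplanned into an upper slot whose `hbm_channel` budget is zero, which is exactly the intended binding behavior.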
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Generating Multiple Floorplan Candidates</head><p>By default, TAPA generates only one floorplan solution, in which we prioritize a balanced distribution of logic and then pipeline the inter-slot connections accordingly. However, due to the severe local congestion around the bottom die in an HBM device, we need to explore different tradeoffs between logic resource usage and routing resource usage, especially die-crossing wires. One floorplan solution may use fewer logic resources in the bottom die but require more die-crossing wires as logic is pushed to the upper regions; another solution may have the opposite effect. We observe that very often it is unpredictable which factor is more important for a given design until the routing process is done. Note that each floorplan solution comes with a corresponding pipelining scheme that best suits the floorplan results.</p><p>Instead of generating only one floorplan solution, we can generate a set of Pareto-optimal points and run physical design concurrently to explore the best results. In our formulation of the floorplan problem, we have a parameter to control the maximum logic resource utilization of each slot. Reducing this parameter reduces local logic resource usage and increases global routing resource usage, and vice versa. Therefore, we sweep through a range of this parameter to generate a set of slightly different floorplans and implement them in parallel to achieve the highest frequency.</p></div>
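The sweep-and-select strategy can be sketched as follows. Here `implement` is a hypothetical stand-in for one complete floorplan-plus-place-and-route run (the real flow would invoke the ILP floorplanner and Vivado for each candidate), and its frequency formula is a dummy model chosen purely for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def implement(max_util):
    """Placeholder for one floorplan + place-and-route run at the given
    per-slot utilization cap; a real flow would launch the floorplanner
    and Vivado here and report the routed frequency."""
    achieved_mhz = 300.0 - abs(max_util - 0.70) * 200  # dummy model
    return max_util, achieved_mhz

def best_floorplan(ratios=(0.60, 0.65, 0.70, 0.75, 0.80)):
    """Sweep the utilization cap, implement all candidates in parallel
    (real runs are independent CAD processes), and keep the fastest."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(implement, ratios))
    return max(results, key=lambda r: r[1])
```

Each candidate is independent, so the sweep costs roughly one implementation's wall-clock time given enough parallel machines.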
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">EXPERIMENTS 7.1 Implementation Details</head><p>TAPA is implemented in C++ and Python. We implement our prototype to interface with the CAD flow for AMD/Xilinx FPGAs, including Vitis HLS, Vivado, and Vitis (2021.2). We use the Python MIP package <ref type="bibr">[72]</ref> coupled with Gurobi <ref type="bibr">[39]</ref> to solve the various ILP problems introduced in the previous sections. We generate Tcl constraint files to be used by Vivado to enforce our high-level floorplanning scheme.</p><p>Meanwhile, we turn off the hierarchy rebuild process during RTL synthesis <ref type="bibr">[89]</ref> to prevent the RTL synthesis tool from introducing additional wire connections between RTL modules. The hierarchy rebuild step first flattens the hierarchy of the RTL design and then tries to rebuild it, so it may create unpredictable new connections between modules. As a result, if two modules are floorplanned far apart, these additional wires introduced during RTL synthesis will be under-pipelined, as they are unseen during HLS compilation. Note that disabling this feature may lead to slight differences in the final resource utilization.</p><p>We test our designs on the Xilinx Alveo U250 FPGA<ref type="foot">foot_2</ref> with 4 DDR DRAMs and the Xilinx Alveo U280 FPGA<ref type="foot">foot_3</ref> with HBM. As the DDR controllers are distributed in the middle vertical column while the HBM controller lies at the bottom row, these two FPGA architectures present different challenges to the CAD tools. Thus, it is worthwhile to test them separately.</p><p>To run our framework, users first specify how they want to divide the device. By default, we divide the U250 FPGA into a 2-column &#215; 4-row grid and the U280 FPGA into a 2-column &#215; 3-row grid, matching the block diagram of these two architectures shown in Figure <ref type="figure">2</ref>. 
To control the floorplanning, users can specify the maximum resource utilization ratio of each slot. The resource utilization is based on the estimation by HLS. Users can also specify how many levels of pipelining to add based on the number of boundary crossings. By default, for each boundary crossing, we add two levels of pipelining to the connection. The processed design is integrated with the Xilinx Vitis infrastructure to communicate with the host.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2">Benchmarks</head><p>We use two groups of benchmarks to demonstrate the proposed methodologies. We first include six benchmarks originally used in AutoBridge <ref type="bibr">[35]</ref> to showcase the frequency improvement from co-optimizing HLS and physical design. AutoBridge uses six representative benchmark designs with different topologies and varies the parameters of the benchmarks to generate a set of designs of varying sizes on both the U250 and the U280 boards. The six designs are all large-scale designs implemented and optimized by HLS experts. Figure <ref type="figure">11</ref> shows the topology of the benchmarks. Note that even for those benchmarks that seem regular (e.g., CNN), the location constraints from peripheral IPs can highly distort their physical layouts.</p><p>-The stencil designs created by the SODA <ref type="bibr">[17]</ref> compiler are a set of kernels in linear topologies. -The genome sequencing design <ref type="bibr">[37]</ref> performing the Minimap2 overlapping algorithm <ref type="bibr">[53]</ref> has processing elements (PEs) in a broadcast topology. This benchmark is based on shared-memory communication, while all other benchmarks are dataflow designs. -The CNN accelerators created by the PolySA <ref type="bibr">[24]</ref> compiler are in a grid topology. -The HBM graph processing design <ref type="bibr">[18]</ref> performs the PageRank algorithm. It features eight sets of processing units and one central controller. This design also contains dependency cycles if viewed at the granularity of computing kernels. -The HBM bucket sort design adapted from Reference <ref type="bibr">[71]</ref>, which includes eight parallel processing lanes and two fully connected layers. 
-The Gaussian elimination designs created by AutoSA <ref type="bibr">[83]</ref> are in triangle topologies.</p><p>In addition, we include three additional benchmarks that use a large number of HBM channels to demonstrate the newly added HBM-specific optimizations. All three benchmarks fail to route with the original AutoBridge; our latest optimizations enable them to route successfully at high frequencies.</p><p>-The Scalable and Automatic Stencil Acceleration Framework (SASA) <ref type="bibr">[80]</ref> accelerators, where one version uses 24 channels and the other uses 27 channels. Compared to the SODA stencil accelerator used in the original AutoBridge paper, the SASA accelerator also has a much more complicated topology. -The HBM sparse matrix-matrix multiply (SpMM) accelerator <ref type="bibr">[76]</ref>, which uses 29 HBM channels. -The sparse matrix-vector multiply (SpMV) accelerators <ref type="bibr">[75]</ref>, where one version uses 20 HBM channels and another uses 28 HBM channels.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.3">Original Evaluation of AutoBridge</head><p>By varying the size of the benchmarks, we have tested 43 designs with different configurations in total. Among them, 16 designs fail in routing or placement with the baseline CAD flow; in contrast, AutoBridge succeeds in routing all of them and achieves an average of 274 MHz. For the other 27 designs, we improve the final frequency from 234 MHz to 311 MHz on average. In general, we find that AutoBridge is effective for designs that use up to about 75% of the available resources. We execute our framework on an Intel Xeon CPU running at 2.2 GHz. Both the baseline designs and the optimized ones are implemented using Vivado at the highest optimization level, with a target frequency of 300 MHz. The final design checkpoint files of all experiments are available in our open-sourced repository.</p><p>In some experiments, the optimized versions have even slightly smaller resource consumption. Possible reasons are that we adopt a different FIFO template and disable the hierarchy rebuild step during RTL synthesis. Also, as the optimization leads to very different placement results compared to those of the original version, we expect the physical design tools to adopt different optimization strategies. The correctness of the code is verified by cycle-accurate simulation and on-board execution.</p><p>Next, we present the detailed results of each benchmark.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>SODA Stencil Computation.</head><p>For the stencil computing design, the kernels are connected in a chain through FIFO channels. By adjusting the number of kernels, we can vary the total size of the design. We test from one kernel up to eight kernels, and Figure <ref type="figure">12</ref> shows the final frequency of the eight design configurations on both the U250 and U280 FPGAs. In the original flow, many design configurations fail in routing due to routing resource conflicts. Those that are routed successfully still achieve relatively low frequencies. In comparison, with the help of AutoBridge, all design configurations are routed successfully. On average, we improve the timing from 86 MHz to 266 MHz on the U280 FPGA and from 69 MHz to 273 MHz on the U250 FPGA.</p><p>Starting from the seven-kernel design, we observe a frequency decrease on the U280 FPGA. This is because each kernel of the design is very large and uses about half the resources of a slot; thus, starting from the seven-kernel design on the relatively small U280, two kernels have to be squeezed into one slot, which causes more severe local routing congestion. Based on this phenomenon, we recommend that users avoid designing very large kernels and instead split the functionality into multiple functions to give the tool more flexibility in floorplanning the design.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CNN Accelerator.</head><p>The CNN accelerator consists of identical PEs in a regular grid topology. We adjust the size of the grid from a 2 &#215; 13 array up to a 16 &#215; 13 array to test the robustness of AutoBridge. Figure <ref type="figure">13</ref> shows the results on both the U250 and U280 FPGAs.</p><p>Although the regular 2-dimensional grid structure is presumed to be FPGA-friendly, the actual implementation results from the original tool flow are not satisfactory. With the original tool flow, even small-size designs are capped at around 220 MHz when targeting the U250. Larger designs fail in placement (13 &#215; 12) or routing (13 &#215; 10 and 13 &#215; 14). Although the final frequency is high when the design is small for the original tool flow targeting the U280, the timing quality steadily drops as the designs become larger. In contrast, AutoBridge improves the average frequency from 140 MHz to 316 MHz on the U250 and from 214 MHz to 328 MHz on the U280. Table 4 lists the resource consumption and cycle counts of the experiments on the U250. Statistics on the U280 are similar and are omitted here.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Gaussian elimination.</head><p>The PEs in this design form a triangle topology. We adjust the size of the triangle and test on both U250 and U280. Table <ref type="table">5</ref> shows the results. On average, we improve the frequency from 245 MHz to 334 MHz on U250 and from 223 MHz to 335 MHz on U280.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>HBM Bucket Sort.</head><p>The bucket sort design has two complex fully connected layers. Each fully connected layer involves an 8 &#215; 8 crossbar of FIFO channels, with each FIFO channel being 256 bits wide. AutoBridge pipelines the FIFO channels to alleviate the routing congestion. Table <ref type="table">6</ref> shows the frequency gain: we improve from 255 MHz to 320 MHz on the U280. As the design requires 16 external memory ports and the U250 only has 4 available, the test for this design is limited to the U280.</p><p>Because the original source code enforces a BRAM-based implementation for some small FIFOs, which wastes BRAM resources, the results of AutoBridge have slightly lower BRAM and flip-flop consumption than the original implementation. In comparison, we use a different FIFO template that chooses the implementation style (BRAM-based or shift-register-based) based on the area of the FIFO. Cycle-accurate simulation has verified the correct functionality of our optimized implementation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>HBM Page Rank.</head><p>This design incorporates eight sets of processing units, each interfacing with two HBM ports. There are also centralized control units that exchange control information with five HBM ports. Table <ref type="table">7</ref> shows the experiment results; we improve the final frequency from 136 MHz to 210 MHz on the U280.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.4">HBM-Specific Optimizations</head><p>In this section, we use four real-world designs from premier academic conferences and journals to demonstrate the effects of our HBM-specific optimizations. We select these designs because they use a large number of HBM channels, which poses a serious timing closure challenge.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>HBM SpMM and SpMV Accelerators.</head><p>The SpMM and SpMV accelerators leverage the async_mmap API, automatic HBM channel binding, die-crossing wire adjusting, and multi-floorplan generation to achieve the best performance. We implement two versions of the SpMV accelerator, SpMV_A24 and SpMV_A16, with different numbers of parallel processing elements. We report the user clock and HBM clock frequencies and the resource utilization in Table <ref type="table">8</ref>. We improve both the user clock and the HBM clock frequencies for all three designs. In particular, for SpMV_A24, we improve the user clock frequency from 193 MHz to 283 MHz and the HBM clock frequency from 430 MHz to 450 MHz. With async_mmap, we significantly reduce BRAM utilization: for SpMM and SpMV_A24, we reduce total BRAM utilization by 10%.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>HBM Stencil Accelerators by SASA.</head><p>The SASA design incorporates the async_mmap API, automatic HBM channel binding, die-crossing wire adjusting, and floorplan candidate generation to push the user clock frequency above 225 MHz and the HBM clock frequency to 450 MHz, which enables the accelerator to fully utilize the HBM bandwidth. For stencil algorithms with a low number of iterations, SASA leverages efficient spatial parallelism: at the start, each kernel reads one tile of input data plus additional halo data from neighboring tiles. Then, each kernel performs the computation for all iterations (if any) without synchronization. Each kernel works in a streaming pattern and uses at least two HBM banks to store the input and output. The original design based on mmap fails to meet the frequency requirement; with the async_mmap API, we are able to significantly reduce BRAM utilization. With all optimizations, the two selected designs achieve 241 MHz and 250 MHz, respectively.</p></div>
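The tile-plus-halo read pattern described above can be sketched as follows. This is an illustrative model only; the helper name, the even row partitioning, and the clamping behavior are our assumptions, not SASA's exact implementation.

```cpp
#include <algorithm>
#include <utility>

// For kernel k of `num_kernels`, return the [begin, end) range of input
// rows it reads: its own tile plus `halo` rows from each neighboring
// tile, clamped to the input bounds. All sizes are illustrative.
inline std::pair<int, int> TileReadRange(int k, int num_kernels, int rows,
                                         int halo) {
  const int tile = (rows + num_kernels - 1) / num_kernels;  // ceil division
  const int begin = std::max(0, k * tile - halo);
  const int end = std::min(rows, (k + 1) * tile + halo);
  return {begin, end};
}
```

For instance, with 100 rows split among 4 kernels and a halo of 2 rows, kernel 1 would read rows [23, 52): its own 25-row tile plus 2 halo rows on each side.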
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Results of Multi-floorplan Generation</head><p>For HBM designs that are sensitive to both logic resource utilization and routing resource utilization, we generate a set of Pareto-optimal floorplans and implement all of them to explore the best achievable results. Table <ref type="table">10</ref> shows the corresponding achievable frequencies. The number of generated floorplan candidates is related to the granularity of the design: designs with larger tasks have less flexibility in floorplanning, so there are fewer points on the Pareto-optimal curve. It remains future work to automatically split large tasks and fuse small tasks to better facilitate the floorplanning process.</p><p>As can be seen, even with the same set of optimization techniques, slightly different floorplans may lead to non-trivial variation in the final achievable frequency. At this stage, we treat the downstream tools as a black box, so we implement all generated floorplan schemes in parallel to search for the best results. How to better predict the final frequency and skip unpromising floorplans at an early stage remains future work.</p></div>
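Selecting the Pareto-optimal candidates from a pool of scored floorplans can be sketched as follows. The two cost metrics and all names here are illustrative assumptions; the actual tool may score candidates differently.

```cpp
#include <vector>

// A floorplan candidate scored by two costs we want to minimize. The
// two-metric trade-off mirrors the text; field names are illustrative.
struct Candidate {
  double peak_util;   // max resource utilization over all slots
  int die_crossings;  // wires crossing die boundaries
};

// Keep only Pareto-optimal candidates: drop any candidate that another
// candidate matches on both metrics and strictly beats on at least one.
inline std::vector<Candidate> ParetoFilter(const std::vector<Candidate>& all) {
  std::vector<Candidate> front;
  for (const auto& c : all) {
    bool dominated = false;
    for (const auto& d : all) {
      if (d.peak_util <= c.peak_util && d.die_crossings <= c.die_crossings &&
          (d.peak_util < c.peak_util || d.die_crossings < c.die_crossings)) {
        dominated = true;
        break;
      }
    }
    if (!dominated) front.push_back(c);
  }
  return front;
}
```

Every candidate on the resulting front is then implemented in parallel, as described above.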
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.5">Control Experiments</head><p>First, we test whether the frequency gain comes from the combination of pipelining and HLS floorplanning or from pipelining alone. To do this, we set up a control group in which we perform floorplanning and pipelining as usual but do not pass the floorplan constraints to the physical design tools. The blue curve with triangle markers in Figure <ref type="figure">15</ref> shows the results. As can be seen, the control group has a lower frequency than the original design for small sizes and only limited improvements over the original designs for large sizes. In all experiments, the group with both pipelining and floorplan constraints (green curve with cross markers) has the highest frequency. This experiment shows that the frequency gain is not simply a result of more pipelining.</p><p>Meanwhile, if we only perform floorplanning without pipelining, then the frequency degrades significantly, as visualized in Figure <ref type="figure">3</ref>.</p><p>Second, we test the effectiveness of setting slot boundaries based on the DDR controllers. We run a set of experiments in which we divide the FPGA into only four slots based on the die boundaries, without the division along the middle column. The yellow curve with diamond markers in Figure <ref type="figure">15</ref> shows the results. As can be seen, it achieves lower frequency than our default eight-slot scheme.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.6">Scalability</head><p>To show that the tool works well on designs with large numbers of small functions, we use the CNN experiments to test the scalability of our algorithms, as the CNN designs have the most vertices (HLS functions) and edges. Table <ref type="table">11</ref> lists the compile-time overhead of the floorplanning and the latency balancing when using Gurobi as the ILP solver. <ref type="foot">4</ref> For the largest CNN accelerator, which has 493 modules and 925 FIFO connections, the floorplanning step takes only around 20 seconds and the latency balancing step takes 0.03 s. FPGA designs are unlikely to have many more modules and connections than this <ref type="bibr">[43,</ref><ref type="bibr">93]</ref>, so our method is fast enough in practice.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">RELATED WORK</head><p>Layout-aware HLS Optimization. Previous works have studied how to couple the physical design process with HLS in a fine-grained manner. Zheng et al. <ref type="bibr">[98]</ref> propose iteratively running placement and routing to finely calibrate the delay estimation of wires. The long runtime of placement and routing prevents their method from benefiting large-scale designs, and their experiments are all based on small examples (thousands of registers and tens of DSPs). Cong et al. <ref type="bibr">[23]</ref> present placement-driven scheduling and binding for multi-cycle communication in an island-style reconfigurable architecture. Xu et al. <ref type="bibr">[94]</ref> propose predicting a register-level floorplan to facilitate the binding process. Some commercial HLS tools <ref type="bibr">[6,</ref><ref type="bibr">78]</ref> use the results of logic synthesis to calibrate HLS delay estimation, but they do not consider interconnect delays. Chen et al. <ref type="bibr">[9]</ref> propose implementing HLS as a sub-routine to adjust the delay/power/variability/area of circuit modules during physical planning across different IC layers. They report a timing improvement of 8% on synthetic designs, whereas we achieve an almost 2&#215; frequency improvement. Kim et al. <ref type="bibr">[48]</ref> propose combining architectural synthesis with placement under a distributed-register architecture to minimize system latency. Stammermann et al. <ref type="bibr">[77]</ref> propose simultaneously performing floorplanning and functional unit binding to reduce power on interconnects. 
The previous approaches share a common focus on the fine-grained interaction between physical design and upstream synthesis, where individual operators and the associated wires and registers are all involved in delay prediction and iterative pipeline co-optimization. While such a fine-grained method can be effective on relatively small designs and FPGA devices, it is too expensive (if not infeasible) for today's large designs targeting multi-die FPGAs, where each implementation iteration may take days to complete.</p><p>In contrast, we focus on a coarse-grained approach that only pipelines the channels that span long distances and guides the detailed placement.</p><p>Other works have studied delay prediction at the behavior level. Guo et al. <ref type="bibr">[36]</ref> propose calibrating the estimated delay of operators with large broadcast factors by pre-characterizing benchmarks with different broadcast factors. Tan et al. <ref type="bibr">[79]</ref> show that the delay prediction of logic operations (e.g., AND, OR, NOT) by HLS tools is too conservative; therefore, they consider the technology mapping of logic operations. These works mainly target local operators and have limited effect on global interconnects. Zhao et al. <ref type="bibr">[97]</ref> use machine learning to predict how manual pragmas affect routing congestion.</p><p>In addition, Cong et al. <ref type="bibr">[25]</ref> present tools that allow users to insert additional buffers into a designated datapath. Chen et al. <ref type="bibr">[10]</ref> propose adding registers to the pipelined datapath during HLS synthesis based on profiling results on the CHStone benchmark. Reference <ref type="bibr">[96]</ref> generates floorplanning constraints only for systolic array designs, and its method does not consider the interaction with peripheral IPs such as DDR controllers. 
In comparison, our work is fully automated for general designs, and our register insertion is accurate thanks to HLS-floorplan co-design.</p><p>Optimization for Multi-die FPGAs. To adapt to multi-die FPGAs, previous works have studied how to partition the entire design or its memories among different dies <ref type="bibr">[15,</ref><ref type="bibr">40,</ref><ref type="bibr">47,</ref><ref type="bibr">58,</ref><ref type="bibr">62,</ref><ref type="bibr">70,</ref><ref type="bibr">82]</ref>. These methods are all based on RTL inputs; thus, the partitioning must respect the cycle-accurate specification. References <ref type="bibr">[40,</ref><ref type="bibr">62]</ref> modify the placement cost function to reduce die-crossing, which leads to designs confined to fewer dies with a higher level of local congestion. Zha et al. <ref type="bibr">[95]</ref> propose methods to virtualize the FPGA so that different applications execute in different partitions. Xiao et al. <ref type="bibr">[87,</ref><ref type="bibr">88]</ref> propose splitting the placement and routing of different parts of the design through dynamic reconfiguration.</p><p>Floorplanning Algorithms. Floorplanning has been extensively studied <ref type="bibr">[2,</ref><ref type="bibr">3,</ref><ref type="bibr">14,</ref><ref type="bibr">61]</ref>. Conventionally, floorplanning consists of (1) feasible topology generation and (2) determining aspect ratios for goals such as minimal total wire length. In existing FPGA CAD flows, the floorplanning step works on RTL input. In contrast, we propose performing coarse-grained floorplanning during the HLS step to provide layout information to the HLS tool. Similar to References <ref type="bibr">[49,</ref><ref type="bibr">50,</ref><ref type="bibr">60]</ref>, our algorithm adopts a partitioning-based approach. As our problem size is relatively small, we use ILP for each partitioning.</p><p>Throughput Analysis of Dataflow Designs. 
Various dataflow models have been proposed in the literature, such as the Kahn Process Network (KPN) <ref type="bibr">[34]</ref> and SDF <ref type="bibr">[51]</ref>, among many others. The simpler the model, the more accurately its throughput can be analyzed. The SDF model restricts the number of data items produced or consumed by a process per firing to be fixed and known; therefore, it is possible to analytically compute the influence of additional latency on throughput <ref type="bibr">[33]</ref>. The LIT <ref type="bibr">[1,</ref><ref type="bibr">8,</ref><ref type="bibr">22,</ref><ref type="bibr">55,</ref><ref type="bibr">56]</ref> enforces restrictions similar to those of SDF. Reference <ref type="bibr">[81]</ref> proposes methods to insert delays when composing IP blocks of different latencies. Reference <ref type="bibr">[45]</ref> studies the buffer placement problem in dataflow circuits <ref type="bibr">[11,</ref><ref type="bibr">12,</ref><ref type="bibr">13,</ref><ref type="bibr">44]</ref>. Other works have studied how to map dataflow programs to domain-specific coarse-grained reconfigurable architectures <ref type="bibr">[27,</ref><ref type="bibr">28,</ref><ref type="bibr">85,</ref><ref type="bibr">86]</ref>.</p><p>In our scenario, each function is compiled into an FSM that can be arbitrarily complex, so it is difficult to quantitatively analyze the effect of the added latency on the total execution cycles. We adopt a conservative approach that balances the added latency on all reconvergent paths.</p></div>
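The conservative balancing can be sketched on a small DAG as follows: assign each vertex the maximum added latency along any path from a source, then pad each channel so that all reconvergent paths carry equal added latency. This is a simplified stand-in for the ILP formulation; the names and the topological-order assumption are ours.

```cpp
#include <algorithm>
#include <vector>

struct Edge { int src, dst, added; };  // pipeline stages inserted on a channel

// Given a DAG whose vertices 0..n-1 are numbered in topological order,
// return the extra depth to pad onto each edge so that every path
// between any two vertices carries the same total added latency.
inline std::vector<int> BalanceLatency(int n, const std::vector<Edge>& edges) {
  std::vector<int> level(n, 0);  // max added latency from any source
  for (int v = 0; v < n; ++v)
    for (const auto& e : edges)
      if (e.dst == v)
        level[v] = std::max(level[v], level[e.src] + e.added);
  std::vector<int> pad;
  pad.reserve(edges.size());
  for (const auto& e : edges)
    pad.push_back(level[e.dst] - level[e.src] - e.added);  // >= 0 by construction
  return pad;
}
```

For a diamond where one branch gains 2 stages and the other gains 1, the shallower branch is padded by 1 so both reconvergent paths see the same added latency.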
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9">FUTURE WORK</head><p>While TAPA has already significantly improved expressiveness and timing closure, there remain many opportunities to further advance the tool. We list several challenges that we aim to address in the future.</p><p>-Reduce the compile time by integrating RapidStream <ref type="bibr">[38]</ref>. RapidStream builds on the idea of HLS-floorplanning co-optimization, and it further splits the design for parallel placement and routing. When tested on the Xilinx U250 FPGA with a set of realistic HLS designs, RapidStream achieves a 5-7&#215; reduction in compile time and up to a 1.3&#215; increase in frequency compared to a commercial off-the-shelf toolchain. We are in the process of integrating RapidStream with TAPA. -Support larger tasks that do not fit in a programmable slot. Currently, we report a warning to the users and instruct them to partition such a task into smaller ones, as we envision that refactoring a task into smaller ones is significantly simpler than tuning the frequency. However, future work may automate this process. -Support more flexible inter-task communication patterns. Currently, TAPA tasks can only communicate with each other through streams. We are extending the infrastructure to support buffer-based channels between tasks for richer expressiveness. -Task-level compiler optimization. As of now, TAPA delegates the compilation of each task to existing HLS tools and does not perform inter-task optimizations. This limitation requires users to come up with a good partitioning of the application into tasks of suitable sizes. We aim to add task-level optimizations such as task splitting, task fusion, and task hierarchy rebuilding to further co-optimize the task hierarchy and the floorplanning process. -Support designs with a hybrid of RTL and HLS tasks. 
The floorplan-guided pipelining methodology could apply to RTL tasks as long as they adhere to pre-defined latency-insensitive interfaces. Although we have explored applying the technique to RTL projects <ref type="bibr">[68,</ref><ref type="bibr">69]</ref>, more effort is needed to provide an automated solution.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10">CONCLUSIONS</head><p>In this article, we present the TAPA framework, an efficient task-parallel programming tool for modern FPGAs. TAPA includes a set of convenient APIs to increase design efficiency. In addition, we tackle the challenge of high-frequency HLS design on multi-die FPGAs by coupling floorplanning and pipelining to effectively insert registers on the long cross-die interconnects. We present a set of optimizations specifically tailored for HBM devices, including automatic HBM port binding, floorplan solution space exploration, and a customized programming API to minimize the area overhead of HBM IO modules. Our framework, TAPA, interfaces with the commercial FPGA design tool flow. It improves the average frequency of 43 designs from 147 MHz to 297 MHz with a negligible area overhead. TAPA has been used in multiple projects to improve the design efficiency and final frequency <ref type="bibr">[16,</ref><ref type="bibr">19,</ref><ref type="bibr">75,</ref><ref type="bibr">76,</ref><ref type="bibr">83]</ref>.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0"><p>ACM Transactions on Reconfigurable Technology and Systems, Vol. 16, No. 4, Article 63. Pub. date: December 2023.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_1"><p>Based on the estimation of resource utilization by HLS. We can increase the accuracy of the area estimation by optionally running logic synthesis of each task in parallel.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_2"><p>The U250 FPGA contains 5,376 BRAM18K, 12,288 DSP48E, 3,456K FF, and 1,728K LUT.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_3"><p>The U280 FPGA contains 4,032 BRAM18K, 9,024 DSP48E, 2,607K FF, and 434K LUT.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_4"><p>Meanwhile, we observed that many open-source ILP solvers are much slower.</p></note>
		</body>
		</text>
</TEI>
