Compiling high-level quantum programs to machines that are size constrained (i.e., a limited number of quantum bits) and time constrained (i.e., a limited number of quantum operations) is challenging. In this paper, we present SQUARE (Strategic QUantum Ancilla REuse), a compilation infrastructure that tackles allocation and reclamation of scratch qubits (called ancilla) in modular quantum programs. At its core, SQUARE strategically performs uncomputation to create opportunities for qubit reuse. Current Noisy Intermediate-Scale Quantum (NISQ) computers and forward-looking Fault-Tolerant (FT) quantum computers have fundamentally different constraints, such as data locality, instruction parallelism, and communication overhead. Our heuristic-based ancilla-reuse algorithm balances these considerations and fits computations into resource-constrained NISQ or FT quantum machines, throttling parallelism when necessary. To precisely capture the workload of a program, we propose an improved metric, the "active quantum volume," and use this metric to evaluate the effectiveness of our algorithm. Our results show that SQUARE improves the average success rate of NISQ applications by 1.47X. Surprisingly, the additional gates for uncomputation create ancilla with better locality and result in substantially fewer swap gates and less gate noise overall. SQUARE also achieves an average reduction of 1.5X (and up to 9.6X) in active quantum volume for FT machines.
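The compute/use/uncompute pattern that SQUARE schedules can be illustrated with a generic sketch. The Qiskit snippet below is our own minimal example, not SQUARE's algorithm: it computes an AND into an ancilla, uses the result, and reverses the computation so the same physical qubit can be reallocated. SQUARE's contribution is deciding when such uncomputation pays off under qubit and gate budgets.

```python
# Minimal compute/use/uncompute sketch (illustrative; not SQUARE itself).
# Uncomputing returns the ancilla to |0>, so a later allocation can reuse it.
from qiskit import QuantumCircuit

qc = QuantumCircuit(4)   # q0, q1: inputs; q2: target; q3: ancilla
qc.ccx(0, 1, 3)          # compute: ancilla <- q0 AND q1
qc.cx(3, 2)              # use: fold the AND result into the target
qc.ccx(0, 1, 3)          # uncompute: ancilla restored to |0>, now reusable
print(qc.draw())
```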
Graph Neural Network Assisted Quantum Compilation for Qubit Allocation
Quantum computers in the current noisy intermediate-scale quantum (NISQ) era face two major limitations: size and error vulnerability. Although quantum error correction (QEC) methods exist, they are not applicable at current machine sizes; QEC requires thousands of qubits, while NISQ systems offer at most around one hundred. One common approach to improving reliability is to adjust the compilation process to produce a more reliable final circuit, where the two most critical compilation decisions are qubit allocation and qubit routing. We focus on solving the qubit allocation problem and identifying initial layouts that reduce error. To identify these layouts, we combine reinforcement learning with a graph neural network (GNN)-based Q-network that processes the mesh topology of the quantum computer, known as the backend, and makes mapping decisions, yielding a Graph Neural Network Assisted Quantum Compilation (GNAQC) strategy. We train the architecture on a set of four backends and six circuits and find that GNAQC improves output fidelity by roughly 12.7% over pre-existing allocation methods.
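To make the idea concrete, here is a toy PyTorch sketch of a GNN-based Q-network; the model, shapes, and feature choices are our own illustrative assumptions rather than the paper's exact design. One round of message passing over the backend's coupling graph yields a Q-value per physical qubit, scoring candidate placements for the next logical qubit.

```python
# Toy GNN Q-network over a backend coupling graph (illustrative assumptions).
import torch
import torch.nn as nn

class GraphQNetwork(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)   # one message-passing round
        self.q_head = nn.Linear(hidden, 1)     # per-node Q-value

    def forward(self, x, adj):
        # x:   (n, feat_dim) node features, e.g. per-qubit error rates
        # adj: (n, n) adjacency matrix of the coupling graph
        h = torch.relu(self.embed(x))
        h = torch.relu(self.msg(adj @ h) + h)  # aggregate neighbor messages
        return self.q_head(h).squeeze(-1)      # (n,) placement scores

# Score placements on a hypothetical 5-qubit line backend.
n, feat_dim = 5, 3
adj = torch.eye(n)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
net = GraphQNetwork(feat_dim)
print(net(torch.rand(n, feat_dim), adj))  # one Q-value per physical qubit
```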
- Award ID(s): 2019511
- PAR ID: 10431815
- Date Published:
- Journal Name: Proceedings of 33rd Great Lakes Symposium on VLSI (GLSVLSI)
- Volume: 1
- Page Range / eLocation ID: 415 to 419
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
One of the key challenges in current Noisy Intermediate-Scale Quantum (NISQ) computers is to control a quantum system with high-fidelity quantum gates. There are many reasons a quantum gate can go wrong; for superconducting transmon qubits in particular, one major source of gate error is unwanted crosstalk between neighboring qubits due to a phenomenon called frequency crowding. We motivate a systematic approach to understanding and mitigating crosstalk noise when executing near-term quantum programs on superconducting NISQ computers. We present a general software solution that alleviates frequency crowding by systematically tuning qubit frequencies according to input programs, trading parallelism for higher gate fidelity when necessary. The net result is that our work dramatically improves the crosstalk resilience of tunable-qubit, fixed-coupler hardware, matching or surpassing more complex architectural designs such as tunable-coupler systems. On NISQ benchmarks, we improve the worst-case program success rate by 13.3x on average compared to existing serialization strategies.
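The serialization baseline the abstract compares against can be sketched in a few lines. The snippet below is a simplified stand-in with a made-up coupling map, not the paper's frequency-tuning method: it splits a layer of two-qubit gates into sub-layers so that no two concurrent gates touch adjacent qubits, trading parallelism for fewer potential frequency collisions.

```python
# Conflict-aware serialization sketch (illustrative; coupling map is made up).
def conflicts(g1, g2, coupling):
    """Gates conflict if they share a qubit or act on adjacent qubits."""
    return any(a == b or (a, b) in coupling or (b, a) in coupling
               for a in g1 for b in g2)

def serialize(layer, coupling):
    """Greedily pack gates into sub-layers with no pairwise conflicts."""
    sublayers = []
    for gate in layer:
        for sub in sublayers:
            if not any(conflicts(gate, g, coupling) for g in sub):
                sub.append(gate)
                break
        else:
            sublayers.append([gate])
    return sublayers

coupling = {(0, 1), (1, 2), (2, 3), (3, 4)}  # a 5-qubit line device
layer = [(0, 1), (2, 3), (3, 4)]             # two-qubit gates requested together
print(serialize(layer, coupling))            # [[(0, 1), (3, 4)], [(2, 3)]]
```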
-
In the current noisy intermediate-scale quantum (NISQ) era, quantum computing faces significant challenges due to noise, which severely restricts the execution of complex algorithms. Superconducting quantum chips, one of the pioneering quantum computing technologies, introduce additional noise when qubits must be moved to adjacent locations to execute designated two-qubit gates. Current compilers rely on decision models that either count the swap gates or multiply the gate errors when choosing swap paths in the routing stage. Our research reveals overlooked cases of error propagation through the circuit, where accumulated errors can affect the final output. In this paper, we propose Error Propagation-Aware Routing (EPAR), designed to improve compilation performance by accounting for accumulated errors during routing. EPAR's effectiveness is validated through benchmarks on a 27-qubit machine and two simulated systems with different topologies. The results indicate an average success rate improvement of 10% on both real and simulated heavy-hex lattice topologies, along with a 16% improvement on a mesh topology simulation. These findings underscore EPAR's potential to substantially advance quantum computing in the NISQ era.
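The two baseline decision models named above, counting swaps versus multiplying gate errors, can be contrasted directly. The sketch below uses networkx with made-up link error rates: weighting each link by -log(1 - e) makes the cheapest path the one that maximizes the product of per-link success probabilities.

```python
# Swap-count vs. error-aware path selection (link error rates are made up).
import math
import networkx as nx

G = nx.Graph()
links = {(0, 1): 0.02, (1, 2): 0.02,              # short but noisy route
         (0, 3): 0.005, (3, 4): 0.005, (4, 2): 0.005}  # longer, cleaner route
for (u, v), err in links.items():
    G.add_edge(u, v, hops=1, neg_log_fid=-math.log(1 - err))

short = nx.shortest_path(G, 0, 2, weight="hops")           # fewest swaps
reliable = nx.shortest_path(G, 0, 2, weight="neg_log_fid") # highest fidelity
print("fewest-swap path:", short)       # [0, 1, 2]: two noisy links
print("error-aware path:", reliable)    # [0, 3, 4, 2]: longer but cleaner
```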
-
Quantum computing (QC) is a new paradigm offering the potential of exponential speedups over classical computing for certain computational problems. Each additional qubit doubles the size of the computational state space available to a QC algorithm. This exponential scaling underlies QC's power, but today's Noisy Intermediate-Scale Quantum (NISQ) devices face significant engineering challenges in scalability. The set of quantum circuits that can be reliably run on NISQ devices is limited by their noisy operations and low qubit counts. This paper introduces CutQC, a scalable hybrid approach that combines classical and quantum computers to enable evaluation of quantum circuits that cannot be run on either alone. CutQC cuts large quantum circuits into smaller subcircuits, allowing them to be executed on smaller quantum devices; classical postprocessing then reconstructs the output of the original circuit. This approach offers significant runtime speedups compared with the only currently viable alternative, purely classical simulation, and demonstrates evaluation of quantum circuits larger than the limits of QC or classical simulation alone. Furthermore, in real-system runs, CutQC achieves much higher quantum circuit evaluation fidelity using small prototype quantum computers than state-of-the-art large NISQ devices achieve. Overall, this hybrid approach lets users leverage classical and quantum computing resources together to evaluate quantum programs far beyond the reach of either one alone.
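The classical postprocessing rests on identities like the single-qubit Pauli resolution below. This numpy check is a toy, not CutQC's full tensor contraction across cuts: it verifies that Pauli-basis information upstream of a cut suffices to reconstruct the cut wire's state via rho = (1/2) * sum_P Tr(P rho) P.

```python
# Verify the wire-cut identity rho = 1/2 * sum_P Tr(P rho) P with numpy.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

psi = np.random.randn(2) + 1j * np.random.randn(2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())          # a random single-qubit pure state

recon = sum(np.trace(P @ rho) * P for P in (I, X, Y, Z)) / 2
print("reconstruction matches:", np.allclose(recon, rho))  # True
```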
-
Understanding the computational power of noisy intermediate-scale quantum (NISQ) devices is of both fundamental and practical importance to quantum information science. Here, we address the question of whether error-uncorrected noisy quantum computers can provide a computational advantage over classical computers. Specifically, we study noisy random circuit sampling in one dimension (1D noisy RCS) as a simple model for exploring the effects of noise on the computational power of a noisy quantum device. We simulate the real-time dynamics of 1D noisy random quantum circuits via matrix product operators (MPOs) and characterize the computational power of the 1D noisy quantum system using a metric we call MPO entanglement entropy, chosen because it determines the cost of classical MPO simulation. We numerically demonstrate that, for the two-qubit gate error rates we considered, there exists a characteristic system size above which adding more qubits does not bring about an exponential growth in the cost of classical MPO simulation of 1D noisy systems. Specifically, above the characteristic system size there is an optimal circuit depth, independent of the system size, at which the MPO entanglement entropy is maximized. Most importantly, the maximum achievable MPO entanglement entropy is bounded by a constant that depends only on the gate error rate, not on the system size. We also provide a heuristic analysis of the scaling of the maximum achievable MPO entanglement entropy as a function of the gate error rate. The obtained scaling suggests that although the cost of MPO simulation does not increase exponentially in the system size above the characteristic size, it does increase exponentially as the gate error rate decreases, possibly making classical simulation practically infeasible even with state-of-the-art supercomputers.
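The metric has a simple concrete analogue for a single gate: reshape a two-qubit unitary across the bipartition, take an SVD, and evaluate the entropy of the normalized squared singular values. The numpy sketch below (our illustration, not the paper's code) does this for a CNOT, whose operator entanglement is exactly 1 bit.

```python
# Operator entanglement entropy of a two-qubit gate across its bipartition.
import numpy as np

cnot = np.eye(4)[[0, 1, 3, 2]]               # CNOT as a 4x4 matrix
op = cnot.reshape(2, 2, 2, 2)                # (out0, out1, in0, in1)
op = op.transpose(0, 2, 1, 3).reshape(4, 4)  # group (out0,in0) | (out1,in1)
s = np.linalg.svd(op, compute_uv=False)      # operator Schmidt coefficients
p = s**2 / np.sum(s**2)                      # normalized Schmidt spectrum
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"operator entanglement of CNOT: {entropy:.3f} bits")  # ~1.000
```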