Security and reliability are primary concerns in any computing paradigm, including quantum computing. Currently, users can access quantum computers through cloud-based platforms where they can run their programs on a suite of quantum machines. As the quantum computing ecosystem grows in popularity and utility, it is reasonable to expect that more companies, including untrusted or less-trusted vendors, will begin offering quantum computers as hardware-as-a-service at varied price/performance points. Since computing time on quantum hardware is expensive and the access queue can be long, users will be motivated to use the cheaper and more readily available but less-trusted or unreliable hardware. Less-trusted vendors can tamper with the results, providing a sub-optimal solution to the user. For applications such as critical infrastructure optimization, an inferior solution may have significant socio-political implications. Since quantum computers cannot be efficiently simulated on classical computers, users have no way of verifying the computation outcome. In this paper, we address this challenge by modeling adversarial tampering and simulating its impact on both pure quantum and hybrid quantum-classical workloads. To achieve trustworthy computing in a mixed environment of trusted and untrusted hardware, we propose an equitable distribution of the total shots (i.e., repeated executions of quantum programs) across hardware options. On average, we note ≈ 30X and ≈ 1.5X improvements in the chosen quality metrics across the pure quantum workloads, and a maximum improvement of ≈ 5X for the hybrid quantum-classical algorithm. We also propose an intelligent, run-adaptive shot-distribution heuristic that leverages temporal variation in hardware quality to the user's advantage, allowing them to identify tampered or untrustworthy hardware at runtime and allocate more shots to the reliable hardware; this yields maximum improvements of ≈ 190X and ≈ 9X across the pure quantum workloads and up to ≈ 2.5X for the hybrid quantum-classical algorithm.
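A minimal sketch of the shot-allocation idea is given below, assuming a simple interface in which each backend's reliability is first probed (e.g., with a known 'canary' circuit) and the remaining budget is then split in proportion to the estimated quality. The function name, the proportional rule, and the probe values are illustrative assumptions, not the paper's exact heuristic.

```python
import numpy as np

def allocate_shots(qualities, total_shots):
    """Split a shot budget across backends in proportion to estimated quality.

    `qualities` holds reliability scores in (0, 1], e.g. the probability that a
    known test circuit returned its expected result on each backend.  The
    proportional rule is an illustrative stand-in for the paper's heuristic.
    """
    q = np.asarray(qualities, dtype=float)
    shots = np.floor(q / q.sum() * total_shots).astype(int)
    shots[np.argmax(q)] += total_shots - shots.sum()  # hand the rounding remainder to the best backend
    return shots

backends = ["trusted_hw", "cheap_hw_A", "cheap_hw_B"]   # hypothetical backends

# Round 1: equitable split while reliability is still unknown.
round1 = allocate_shots([1.0, 1.0, 1.0], 3000)           # -> [1000, 1000, 1000]

# Round 2: suppose the probe suggests cheap_hw_B is tampered or unusually noisy.
round2 = allocate_shots([0.95, 0.90, 0.20], 7000)        # most shots go to the reliable backends
print(dict(zip(backends, round1)))
print(dict(zip(backends, round2)))
```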
Proving Quantum Programs Correct
As quantum computing progresses steadily from theory into practice, programmers will face a common problem: How can they be sure that their code does what they intend it to do? This paper presents encouraging results in the application of mechanized proof to the domain of quantum programming, in the context of the SQIR development. It verifies the correctness of a range of quantum algorithms, including Grover's algorithm and quantum phase estimation, a key component of Shor's algorithm. In doing so, it aims to highlight both the successes and challenges of formal verification in the quantum context and to motivate the theorem proving community to target quantum computing as an application domain.
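For intuition about what "the program meets its specification" means in this setting, the toy check below compares the matrix of a tiny circuit against its intended unitary numerically. This is only an illustration: the mechanized approach described above proves such equalities symbolically in a proof assistant, for all inputs and for parameterized circuit families, which brute-force matrix comparison cannot do.

```python
import numpy as np

# Program: apply the Hadamard gate twice to one qubit ("H; H").
# Specification: the composite operation should be the identity.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
program_semantics = H @ H
specification = np.eye(2)

assert np.allclose(program_semantics, specification)
print("the 'H; H' circuit matches its specification (identity) on this 1-qubit example")
```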
- Award ID(s):
- 1926585
- PAR ID:
- 10253287
- Date Published:
- Journal Name:
- 12th International Conference on Interactive Theorem Proving
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Modeling many-body quantum systems is widely regarded as one of the most promising applications for near-term noisy quantum computers. However, in the near term, system-size limitations will remain a severe barrier for applications in materials science or strongly correlated systems. A promising avenue of research is to combine many-body physics with machine learning for the classification of distinct phases. We present a workflow that synergizes quantum computing, many-body theory, and quantum machine learning (QML) for studying strongly correlated systems. In particular, it can capture a putative quantum phase transition of the prototypical strongly correlated system, the Hubbard model. Following the recent proposal of the hybrid quantum-classical algorithm for two-site dynamical mean-field theory (DMFT), we present a modification that allows the self-consistent solution of the single-bath-site DMFT. The modified algorithm can be generalized to multiple bath sites. This approach is used to generate a database of zero-temperature wavefunctions of the Hubbard model within the DMFT approximation. We then use a QML algorithm to distinguish between the metallic phase and the Mott insulator phase and thereby capture the metal-to-Mott-insulator phase transition. We train a recently proposed quantum convolutional neural network (QCNN) and then use the QCNN as a quantum classifier to locate the phase transition region. This work provides a recipe for application to other phase transitions in strongly correlated systems and represents an exciting application of small-scale quantum devices realizable with near-term technology.
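The overall control flow of such a workflow can be sketched as follows. Every name and numeric rule in this snippet is a placeholder (the dummy solver and the threshold classifier are not the authors' implementation); the sketch only shows the pipeline shape: sweep the interaction strength, obtain a self-consistent ground state per point, and let a classifier flag the phase so the transition region can be read off.

```python
import numpy as np

def hybrid_dmft_ground_state(U):
    """Placeholder for the hybrid quantum-classical DMFT solver.  In the paper,
    a quantum circuit prepares the impurity ground state each iteration and the
    bath parameter is updated classically until self-consistency; here the
    returned 'state' is a two-amplitude dummy, not the real physics."""
    state = np.array([1.0, U])
    return state / np.linalg.norm(state)

def qcnn_phase_label(state):
    """Placeholder for the trained QCNN classifier (0 = metal, 1 = Mott insulator).
    A real QCNN measures the prepared quantum state; this is a dummy threshold rule."""
    return int(abs(state[1]) > abs(state[0]))

# Sweep the interaction strength U, build a small database of (dummy) ground
# states, classify each one, and read off where the predicted label flips --
# that region marks the candidate metal-to-Mott-insulator transition.
U_values = np.linspace(0.0, 2.0, 9)
labels = [qcnn_phase_label(hybrid_dmft_ground_state(U)) for U in U_values]
print(list(zip(U_values.round(2), labels)))
```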
The emergence of quantum computers as a new computational paradigm has been accompanied by speculation concerning the scope and timeline of their anticipated revolutionary changes. While quantum computing is still in its infancy, the variety of different architectures used to implement quantum computations makes it difficult to reliably measure and compare performance. This problem motivates our introduction of SupermarQ, a scalable, hardware-agnostic quantum benchmark suite that uses application-level metrics to measure performance. SupermarQ is the first attempt to systematically apply techniques from classical benchmarking methodology to the quantum domain. We define a set of feature vectors to quantify coverage, select applications from a variety of domains to ensure the suite is representative of real workloads, and collect benchmark results from the IBM, IonQ, and AQT@LBNL platforms. Looking forward, we envision that quantum benchmarking will encompass a large cross-community effort built on open-source, constantly evolving benchmark suites. We introduce SupermarQ as an important step in this direction.
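As a rough illustration of what "feature vectors to quantify coverage" can look like, the snippet below computes two simple features from a gate list. The feature definitions here (two-qubit-gate fraction and the fraction of qubits a circuit touches) are simplified stand-ins chosen for this sketch, not SupermarQ's actual feature set.

```python
from collections import Counter

def circuit_features(gates, n_qubits):
    """Toy feature vector for a circuit given as (gate_name, qubit_indices) tuples.

    Returns [fraction of two-qubit gates, fraction of qubits used] -- simplified
    stand-ins for application-level features a hardware-agnostic benchmark suite
    could use to characterize how thoroughly its workloads exercise a device.
    """
    n_gates = len(gates)
    two_qubit_fraction = sum(1 for _, qubits in gates if len(qubits) == 2) / n_gates
    qubit_usage = Counter(q for _, qubits in gates for q in qubits)
    qubit_coverage = len(qubit_usage) / n_qubits
    return [two_qubit_fraction, qubit_coverage]

# A 3-qubit GHZ-state preparation: H on qubit 0, then a CNOT chain.
ghz = [("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2))]
print(circuit_features(ghz, n_qubits=3))   # -> [0.666..., 1.0]
```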
Quantum computing technology may soon deliver revolutionary improvements in algorithmic performance, but it is useful only if computed answers are correct. While hardware-level decoherence errors have garnered significant attention, a less recognized obstacle to correctness is that of human programming errors—“bugs.” Techniques familiar to most programmers from the classical domain for avoiding, discovering, and diagnosing bugs do not easily transfer, at scale, to the quantum domain because of its unique characteristics. To address this problem, we have been working to adapt formal methods to quantum programming. With such methods, a programmer writes a mathematical specification alongside the program and semiautomatically proves the program correct with respect to it. The proof’s validity is automatically confirmed—certified—by a “proof assistant.” Formal methods have successfully yielded high-assurance classical software artifacts, and the underlying technology has produced certified proofs of major mathematical theorems. As a demonstration of the feasibility of applying formal methods to quantum programming, we present a formally certified end-to-end implementation of Shor’s prime factorization algorithm, developed as part of a framework for applying the certified approach to general applications. By leveraging our framework, one can significantly reduce the effects of human errors and obtain a high-assurance implementation of large-scale quantum applications in a principled way.
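An end-to-end Shor implementation includes, besides the quantum order-finding circuit, a classical reduction from order finding to factoring. The sketch below shows only that standard classical part, with the quantum step replaced by brute-force order computation; it is an independent illustration, not part of the certified development described above.

```python
from math import gcd

def multiplicative_order(a, N):
    """Brute-force order of a modulo N; stands in for the quantum order-finding step."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    """Classical reduction: turn the order r of a mod N into a factor of N (when it works)."""
    common = gcd(a, N)
    if common != 1:
        return common                      # lucky guess: a already shares a factor with N
    r = multiplicative_order(a, N)
    if r % 2 == 1:
        return None                        # odd order: retry with another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                        # trivial square root: retry with another base a
    factor = gcd(y - 1, N)
    return factor if 1 < factor < N else None

print(shor_classical_part(15, 7))   # -> 3 (retry with another base if None)
```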
We study the problem of measuring errors in non-trace-preserving quantum operations, with a focus on their impact on quantum computing. We propose an error metric that efficiently provides an upper bound on the trace distance between the normalized output states from imperfect and ideal operations, while remaining compatible with the diamond distance. As a demonstration of its application, we apply our metric in the analysis of a lossy beam splitter and a nondeterministic conditional sign-flip gate, two primary non-trace-preserving operations in the Knill-Laflamme-Milburn protocol. We then turn to the leakage errors of neutral-atom quantum computers, finding that these errors scale worse than previously anticipated, implying a more stringent fault-tolerant threshold. We also assess the quantum Zeno gate's error using our metric. In a broader context, we discuss the potential of our metric to analyze general postselected protocols, where it can be employed to study error propagation and estimate thresholds in fault-tolerant quantum computing. The results highlight the critical role of our proposed error metric in understanding and addressing challenges in practical quantum information processing.
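For background on the quantity this metric bounds, the snippet below computes the textbook trace distance T(ρ, σ) = ½‖ρ − σ‖₁ between a normalized imperfect output and the ideal one. The example density matrices are arbitrary, and the paper's actual metric is not reproduced here.

```python
import numpy as np

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * ||rho - sigma||_1: half the sum of the absolute
    eigenvalues of the (Hermitian) difference of two density matrices."""
    eigenvalues = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigenvalues))

def renormalize(rho):
    """Rescale the output of a non-trace-preserving operation so that tr(rho) = 1."""
    return rho / np.trace(rho)

# Ideal output |0><0| versus an imperfect output whose trace is below 1 because
# part of the state leaked out of the computational subspace (values are made up).
ideal = np.array([[1.00, 0.0],
                  [0.0, 0.00]])
imperfect = np.array([[0.80, 0.0],
                      [0.0, 0.05]])
print(trace_distance(ideal, renormalize(imperfect)))   # approximately 0.059
```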

