
Title: Implementing NChooseK on IBM Q Quantum Computer Systems
This work contributes a generalized model for quantum computation called NChooseK. NChooseK is based on a single parametrized primitive suitable for expressing a variety of problems that cannot be solved efficiently using classical computers but may admit an efficient quantum solution. We implement a code generator that, given arbitrary parameters for N and K, generates code suitable for execution on IBM Q quantum hardware. We assess the performance of the code generator and the limitations imposed by circuit depth and gate count, and we propose optimizations. We identify future work to improve the efficiency and applicability of the NChooseK model.
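
To make the model concrete, the sketch below gives a classical brute-force interpretation of NChooseK constraints in Python. The abstract does not spell out the model's surface syntax, so the representation here (a constraint as a tuple of variable names plus a set K of allowed true-counts) is an illustrative assumption; the paper's code generator compiles such constraint systems to IBM Q circuits rather than enumerating assignments.

    from itertools import product

    def solve_nchoosek(variables, constraints):
        """Yield assignments satisfying every (vars, K) constraint, where a
        constraint holds iff the number of true variables in vars lies in K."""
        for bits in product((False, True), repeat=len(variables)):
            env = dict(zip(variables, bits))
            if all(sum(env[v] for v in vs) in K for vs, K in constraints):
                yield env

    # Example system: exactly one of {a, b, c} is true, and b = NOT a
    # (written as "exactly one of {a, b} is true").
    constraints = [
        (("a", "b", "c"), {1}),
        (("a", "b"), {1}),
    ]
    for env in solve_nchoosek(("a", "b", "c"), constraints):
        print(env)   # two solutions: only a true, or only b true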
Award ID(s):
1917383
NSF-PAR ID:
10189468
Journal Name:
International Conference on Reversible Computation (RC), Springer LNCS
Volume:
11497
Sponsoring Org:
National Science Foundation
More Like this
  1. Computer scientists and programmers face the difficulty of improving the scalability of their applications using only conventional programming techniques. As a baseline hypothesis of this paper, we assume that an advanced runtime system can be used to take full advantage of the available parallel resources of a machine in order to achieve the highest parallelism possible. In this paper we present the capabilities of HPX - a distributed runtime system for parallel applications of any scale - to achieve the best possible scalability through asynchronous task execution [1]. OP2 is an active library which provides a framework for the parallel execution of unstructured grid applications on different multi-core/many-core hardware architectures [2]. OP2 generates code which uses OpenMP for loop parallelization within an application code for both single-threaded and multi-threaded machines. In this work we modify the OP2 code generator to target HPX instead of OpenMP, i.e. port the parallel simulation backend of OP2 to utilize HPX. We compare the performance results of the different parallelization methods using HPX and OpenMP for loop parallelization within the Airfoil application. The results of strong-scaling and weak-scaling tests for the Airfoil application on one node with up to 32 threads are presented. Using HPX for parallelization of OP2 gives an improvement in performance of 5%-21%. By modifying the OP2 code generator to use HPX's parallel algorithms, we observe scaling improvements of about 5% compared to OpenMP. To fully exploit the potential of HPX, we adapted the OP2 API to expose a future- and dataflow-based programming model and applied this technique to parallelize the same Airfoil application. We show that the dataflow-oriented programming model, which automatically creates an execution tree representing the algorithmic data dependencies of our application, improves the overall scaling results by about 21% compared to OpenMP. Our results show the advantage of using the asynchronous programming model implemented by HPX.
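
HPX itself is a C++ runtime, so the following Python sketch is only a schematic analogue of the future/dataflow style described above: a toy dataflow combinator (a hypothetical helper, not an HPX or OP2 API) that launches a task once all of its input futures are ready, building the same kind of dependency-driven execution tree.

    from concurrent.futures import ThreadPoolExecutor, Future

    pool = ThreadPoolExecutor(max_workers=8)

    def dataflow(fn, *inputs):
        """Run fn once every input future is ready; returns a new future.

        Plain values may be mixed with futures, loosely mirroring the
        future/dataflow composition style the abstract describes.
        """
        out = Future()
        def run():
            args = [x.result() if isinstance(x, Future) else x for x in inputs]
            try:
                out.set_result(fn(*args))
            except Exception as exc:
                out.set_exception(exc)
        pool.submit(run)        # dependency waiting happens off the caller
        return out

    # Toy "loop kernels": each node runs as soon as its inputs resolve,
    # forming an execution tree shaped by the data dependencies.
    a = pool.submit(lambda: [1.0, 2.0, 3.0])             # produce mesh data
    b = dataflow(lambda xs: [2 * x for x in xs], a)      # depends on a
    c = dataflow(lambda xs, ys: sum(x + y for x, y in zip(xs, ys)), a, b)
    print(c.result())                                    # -> 18.0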
  2. Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low-entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications, key derivation from biometrics and physical unclonable functions, which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable.
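
The following toy sketch illustrates the two layers of the construction just described: the classical code offset (a random codeword pads the source reading) and its lift into the exponent of a group generator. All parameters are deliberately toy-sized and insecure, the repetition code with majority decoding stands in for the paper's actual code family, and confidence information is omitted.

    import hashlib, random
    from collections import Counter

    p = 2**61 - 1           # toy prime modulus (insecure; demo only)
    g = 3                   # assumed generator of a large subgroup (toy choice)
    n = 15                  # symbols read from the noisy source

    def gen(w):
        """Enrollment: pad the reading w with a random repetition codeword,
        then publish the padded value in the exponent."""
        m = random.randrange(p - 1)                  # random message symbol
        helper = [pow(g, (wi + m) % (p - 1), p) for wi in w]
        key = hashlib.sha256(str(pow(g, m, p)).encode()).digest()
        return helper, key

    def rep(helper, w2):
        """Reproduction: strip the (noisy) reading and majority-decode the
        repetition code directly on group elements."""
        candidates = [hi * pow(g, -(w2i % (p - 1)), p) % p
                      for hi, w2i in zip(helper, w2)]
        gm, _ = Counter(candidates).most_common(1)[0]   # decode in exponent
        return hashlib.sha256(str(gm).encode()).digest()

    w = [random.randrange(p - 1) for _ in range(n)]     # enrollment reading
    helper, key = gen(w)
    w2 = list(w)
    for i in random.sample(range(n), 4):                # 4 noisy symbols
        w2[i] = random.randrange(p - 1)
    assert rep(helper, w2) == key                       # majority still wins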
  3. We study the effectiveness of quantum error correction against coherent noise. Coherent errors (for example, unitary noise) can interfere constructively, so that in some cases the average infidelity of a quantum circuit subjected to coherent errors may increase quadratically with the circuit size; in contrast, when errors are incoherent (for example, depolarizing noise), the average infidelity increases at worst linearly with circuit size. We consider the performance of quantum stabilizer codes against a noise model in which a unitary rotation is applied to each qubit, where the axes and angles of rotation are nearly the same for all qubits. In particular, we show that for the toric code subject to such independent coherent noise, and for minimal-weight decoding, the logical channel after error correction becomes increasingly incoherent as the length of the code increases, provided the noise strength decays inversely with the code distance. A similar conclusion holds for weakly correlated coherent noise. Our methods can also be used for analyzing the performance of other codes and fault-tolerant protocols against coherent noise. However, our result does not show that the coherence of the logical channel is suppressed in the more physically relevant case where the noise strength is held constant as the code block grows, and we recount the difficulties that prevented us from extending the result to that case. Nevertheless, our work supports the idea that fault-tolerant quantum computing schemes will work effectively against coherent noise, providing encouraging news for quantum hardware builders who worry about the damaging effects of control errors and coherent interactions with the environment.
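
The quadratic-versus-linear scaling contrast described above can be checked numerically for a single qubit; the snippet below (illustrative, not from the paper) composes a small coherent over-rotation n times and compares the resulting infidelity with an incoherent error of matched per-gate infidelity.

    import numpy as np

    theta = 0.01                          # small over-rotation per gate (rad)
    RZ = np.array([[np.exp(-1j * theta / 2), 0],
                   [0, np.exp(1j * theta / 2)]])
    plus = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>, sensitive to Z rotations

    r = 1 - abs(plus @ (RZ @ plus)) ** 2   # per-gate infidelity ~ (theta/2)^2

    for n in (10, 100, 1000):
        # Coherent: the same unitary n times; amplitudes add constructively.
        psi = np.linalg.matrix_power(RZ, n) @ plus
        coh = 1 - abs(plus @ psi) ** 2     # ~ n^2 * r until saturation
        # Incoherent caricature: each gate independently damages the state
        # with probability r, so error probabilities add instead.
        inc = 1 - (1 - r) ** n             # ~ n * r for small r
        print(f"n={n:5d}  coherent={coh:.3e}  incoherent={inc:.3e}")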
  4. We study the problem of inverting a deep generative model with ReLU activations. Inversion corresponds to finding a latent code vector that best explains the observed measurements. In most prior works this is performed by attempting to solve a non-convex optimization problem involving the generator. In this paper we obtain several novel theoretical results for the inversion problem. We show that for the realizable case, single-layer inversion can be performed exactly in polynomial time by solving a linear program. Further, we show that for multiple layers, inversion is NP-hard and the pre-image set can be non-convex. For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, if the layers are expanding and the weights are randomly selected. Very recent work analyzed the same problem for gradient descent inversion. Their analysis requires significantly higher expansion (logarithmic in the latent dimension), while our proposed algorithm can provably reconstruct even with constant factor expansion. We also provide provable error bounds for different norms for reconstructing noisy observations. Our empirical validation demonstrates that we obtain better reconstructions when the latent dimension is large.
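
The single-layer claim is easy to demonstrate: if y = ReLU(Wx) is realizable, the coordinates with y_i > 0 force equalities W_i x = y_i and the zero coordinates force inequalities W_i x <= 0, so inversion reduces to a linear feasibility problem. The sketch below (an illustration of this observation with made-up sizes, not the authors' code) solves it with scipy.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, d = 40, 10                        # measurements, latent dimension
    W = rng.standard_normal((m, d))      # single ReLU layer
    x_true = rng.standard_normal(d)
    y = np.maximum(W @ x_true, 0.0)      # realizable observation y = ReLU(W x)

    on = y > 0                           # active ReLU units
    res = linprog(
        c=np.zeros(d),                            # pure feasibility problem
        A_eq=W[on], b_eq=y[on],                   # active rows:   W_i x = y_i
        A_ub=W[~on], b_ub=np.zeros((~on).sum()),  # inactive rows: W_i x <= 0
        bounds=[(None, None)] * d,                # latent code unconstrained
    )
    assert res.success
    print(np.max(np.abs(res.x - x_true)))         # ~0: exact recovery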
  5. One of the key challenges arising when compilers vectorize loops for today's SIMD-compatible architectures is to decide whether vectorization or interleaving is beneficial. The compiler then has to determine the number of instructions to pack together and the interleaving level (stride). Compilers today use fixed-cost models based on heuristics to make vectorization decisions on loops. However, these models are unable to capture the data dependencies, the computation graph, or the organization of instructions. Alternatively, software engineers often hand-write the vectorization factors of every loop. This, however, places a huge burden on them, since it requires prior experience and significantly increases the development time. In this work, we explore a novel approach for handling loop vectorization and propose an end-to-end solution using deep reinforcement learning (RL). We conjecture that deep RL can capture different instructions, dependencies, and data structures to enable learning a sophisticated model that can better predict the actual performance cost and determine the optimal vectorization factors. We develop an end-to-end framework, from code to vectorization, that integrates deep RL in the LLVM compiler. Our proposed framework takes benchmark codes as input and extracts the loop codes. These loop codes are then fed to a loop embedding generator that learns an embedding for these loops. Finally, the learned embeddings are used as input to a deep RL agent, which dynamically determines the vectorization factors for all the loops. We further extend our framework to support random search, decision trees, supervised neural networks, and nearest-neighbor search. We evaluate our approaches against the currently used LLVM vectorizer and loop polyhedral optimization techniques. Our experiments show a 1.29x-4.73x performance speedup compared to the baseline, and only 3% worse than brute-force search, on a wide range of benchmarks.
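
The paper's agent is a deep RL model over learned loop embeddings inside LLVM; the stand-in below strips that down to the bare decision problem to show the action space and reward signal. Both the action grid and the benchmark() timing hook are hypothetical: a real integration would compile and time the loop with the chosen factors rather than query a made-up cost surface.

    import random

    random.seed(0)  # reproducible toy run

    # Action space: (vectorization factor, interleaving factor) pairs.
    ACTIONS = [(vf, il) for vf in (1, 2, 4, 8, 16) for il in (1, 2, 4, 8)]

    def benchmark(vf, il):
        # Hypothetical hook: stands in for compiling the loop with these
        # factors and measuring speedup over the scalar build. The made-up
        # surface below makes (8, 2) optimal.
        return 4.0 - 0.1 * ((vf - 8) ** 2 / 8 + (il - 2) ** 2)

    value = {a: 0.0 for a in ACTIONS}   # running mean reward per action
    count = {a: 0 for a in ACTIONS}
    eps = 0.2                           # exploration rate

    for _ in range(1000):
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=value.get)
        r = benchmark(*a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]   # incremental mean update

    print("chosen factors:", max(ACTIONS, key=value.get))   # -> (8, 2)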