Abstract Quantum neuromorphic computing (QNC) is a sub-field of quantum machine learning (QML) that capitalizes on inherent system dynamics. As a result, QNC can run on contemporary, noisy quantum hardware and is poised to realize challenging algorithms in the near term. One key issue in QNC is the characterization of the requisite dynamics for ensuring expressive quantum neuromorphic computation. We address this issue by proposing a building block for QNC architectures, which we call the quantum perceptron (QP). Our proposed QPs compute based on the analog dynamics of interacting qubits with tunable coupling constants. We show that QPs are, with restricted resources, a quantum equivalent of the classical perceptron, a simple mathematical model for a neuron that is the building block of various machine learning architectures. Moreover, we show that QPs are theoretically capable of producing any unitary operation. Thus, QPs are computationally more expressive than their classical counterparts, and QNC architectures built out of QPs are, in theory, universal. We introduce a technique for mitigating barren plateaus in QPs called entanglement thinning. We demonstrate the effectiveness of QPs by applying them to numerous QML problems, including calculating the inner products between quantum states, entanglement witnessing, and quantum metrology. Finally, we discuss potential implementations of QPs and how they can be used to build more complex QNC architectures.
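The abstract does not specify the QP Hamiltonian, but "analog dynamics of interacting qubits with tunable coupling constants" suggests an evolution of the form U = exp(-iHt). A minimal numerical sketch, assuming an illustrative Ising-type Hamiltonian with tunable couplings J_ij and local fields h_i (this form and these names are our assumptions, not the paper's):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(qubit_op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit register."""
    ops = [qubit_op if k == site else I2 for k in range(n)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def qp_unitary(J, h, t):
    """Analog evolution U = exp(-i H t) for an assumed QP-style Hamiltonian
    H = sum_{i<j} J[i, j] Z_i Z_j + sum_i h[i] X_i with tunable couplings."""
    n = len(h)
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H += h[i] * op_on(X, i, n)
        for j in range(i + 1, n):
            H += J[i, j] * op_on(Z, i, n) @ op_on(Z, j, n)
    return expm(-1j * H * t)

# Example: a 3-qubit QP acting on |000>
rng = np.random.default_rng(0)
n = 3
J = rng.normal(size=(n, n))
h = rng.normal(size=n)
psi = np.zeros(2**n, dtype=complex); psi[0] = 1.0
out = qp_unitary(J, h, t=1.0) @ psi
print(np.round(np.abs(out)**2, 3))  # output distribution over basis states
```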
Barren plateaus from learning scramblers with local cost functions
Abstract The existence of barren plateaus has recently revealed new training challenges in quantum machine learning (QML). Uncovering the mechanisms behind barren plateaus is essential in understanding the scope of problems that QML can efficiently tackle. Barren plateaus have recently been shown to exist when learning global properties of random unitaries, which is relevant when learning black hole dynamics. Establishing whether local cost functions can circumvent these barren plateaus is pertinent if we hope to apply QML to quantum many-body systems. We prove a no-go theorem showing that local cost functions encounter barren plateaus in learning random unitary properties.
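For concreteness, the global and local cost functions for learning a target unitary U with an ansatz V(θ) are conventionally written as follows (these are the standard definitions from the variational-compilation literature; the paper's exact conventions may differ):

```latex
% Standard global vs. local cost for learning a target unitary U with ansatz V(\theta);
% the paper's exact conventions may differ.
C_{\mathrm{G}} = 1 - \big|\langle 0|^{\otimes n}\, V(\theta)^{\dagger} U \,|0\rangle^{\otimes n}\big|^{2},
\qquad
C_{\mathrm{L}} = 1 - \frac{1}{n}\sum_{j=1}^{n}
\operatorname{Tr}\!\Big[\big(|0\rangle\!\langle 0|_{j}\otimes \mathbb{1}_{\bar{j}}\big)\,
V(\theta)^{\dagger} U\, |0\rangle\!\langle 0|^{\otimes n}\, U^{\dagger} V(\theta)\Big].
```

The local cost checks each qubit's overlap separately, which in some settings keeps gradients polynomially large; the paper's no-go theorem shows that this escape fails when learning properties of random unitaries.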
- Award ID(s): 2037687
- PAR ID: 10482940
- Publisher / Repository: Springer
- Date Published:
- Journal Name: Journal of High Energy Physics
- Volume: 2023
- Issue: 1
- ISSN: 1029-8479
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Despite the great promise of quantum machine learning models, there are several challenges one must overcome before unlocking their full potential. For instance, models based on quantum neural networks (QNNs) can suffer from excessive local minima and barren plateaus in their training landscapes. Recently, the nascent field of geometric quantum machine learning (GQML) has emerged as a potential solution to some of those issues. The key insight of GQML is that one should design architectures, such as equivariant QNNs, that encode the symmetries of the problem at hand. Here, we focus on problems with permutation symmetry (i.e., the symmetry group S_n) and show how to build S_n-equivariant QNNs. We provide an analytical study of their performance, proving that they do not suffer from barren plateaus, quickly reach overparametrization, and generalize well from small amounts of data. To verify our results, we perform numerical simulations for a graph state classification task. Our work provides theoretical guarantees for equivariant QNNs, thus indicating the power and potential of GQML.
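A minimal sketch of the key idea, assuming the standard construction in which every generator is a symmetric sum over qubits so that each layer commutes with qubit permutations (the paper's exact ansatz may differ):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative S_n-equivariant layer: all generators are permutation-invariant sums.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(ops):
    """Tensor a list of single-qubit operators into one n-qubit operator."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def sym_sum(op, n):
    """sum_i op_i : permutation-invariant one-body generator."""
    return sum(embed([op if k == i else I2 for k in range(n)]) for i in range(n))

def sym_zz(n):
    """sum_{i<j} Z_i Z_j : permutation-invariant two-body generator."""
    return sum(embed([Z if k in (i, j) else I2 for k in range(n)])
               for i in range(n) for j in range(i + 1, n))

def equivariant_layer(theta, n):
    """U(theta) built only from S_n-invariant generators, hence S_n-equivariant."""
    return (expm(-1j * theta[0] * sym_sum(X, n))
            @ expm(-1j * theta[1] * sym_sum(Z, n))
            @ expm(-1j * theta[2] * sym_zz(n)))

# Sanity check: the layer commutes with swapping qubits 0 and 1.
n, theta = 3, np.array([0.3, 0.7, 1.1])
U = equivariant_layer(theta, n)
P = np.zeros((2**n, 2**n))
for b in range(2**n):
    bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
    bits[0], bits[1] = bits[1], bits[0]
    P[int("".join(map(str, bits)), 2), b] = 1
print(np.allclose(P @ U, U @ P))  # True
```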
Abstract We define *laziness* to describe a large suppression of variational parameter updates for neural networks, classical or quantum. In the quantum case, the suppression is exponential in the number of qubits for randomized variational quantum circuits. We discuss the difference between laziness and the *barren plateau* phenomenon in quantum machine learning, introduced by McClean et al (2018 Nat. Commun. 9 1–6) for the flatness of the loss-function landscape during gradient descent. We present a novel theoretical understanding of these two phenomena in light of the theory of neural tangent kernels. For noiseless quantum circuits, without measurement noise, the loss-function landscape is complicated in the overparametrized regime with a large number of trainable variational angles. Instead, around a random starting point in optimization, there are large numbers of local minima that are good enough to minimize the mean square loss function; there we still have quantum laziness, but we do not have barren plateaus. However, the complicated landscape is not visible within a limited number of iterations or at the low precision available in quantum control and quantum sensing. Moreover, we look at the effect of noise during optimization by assuming intuitive noise models, and show that variational quantum algorithms are noise-resilient in the overparametrization regime. Our work precisely reformulates the quantum barren plateau statement as a statement about precision, justifies that statement in certain noise models, injects new hope into near-term variational quantum algorithms, and provides theoretical connections to classical machine learning. Our paper provides conceptual perspectives on quantum barren plateaus, together with discussions of the gradient descent dynamics in Liu et al (2023 Phys. Rev. Lett. 130 150601).
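The suppression of parameter updates can be probed numerically. A minimal Monte-Carlo sketch, assuming a generic hardware-efficient circuit (RY rotations plus CZ chains; this stand-in circuit is our assumption, not the paper's model), estimating the variance of a parameter-shift gradient as the qubit count grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def cz_chain_phases(n):
    """Diagonal of a chain of CZ gates on adjacent qubits."""
    idx = np.arange(2 ** n)
    ph = np.ones(2 ** n)
    for i in range(n - 1):
        bi = (idx >> (n - 1 - i)) & 1
        bj = (idx >> (n - 2 - i)) & 1
        ph *= np.where((bi & bj) == 1, -1.0, 1.0)
    return ph

def cost(angles, n, layers):
    """C = <Z on qubit 0> after alternating RY layers and CZ chains."""
    psi = np.zeros(2 ** n); psi[0] = 1.0
    ph = cz_chain_phases(n)
    a = angles.reshape(layers, n)
    for l in range(layers):
        U = ry(a[l, 0])
        for q in range(1, n):
            U = np.kron(U, ry(a[l, q]))
        psi = ph * (U @ psi)
    signs = 1.0 - 2.0 * ((np.arange(2 ** n) >> (n - 1)) & 1)
    return np.sum(signs * psi ** 2)

def grad_first_angle(angles, n, layers):
    """Exact parameter-shift gradient w.r.t. the first rotation angle."""
    e = np.zeros_like(angles); e[0] = np.pi / 2
    return 0.5 * (cost(angles + e, n, layers) - cost(angles - e, n, layers))

for n in (2, 4, 6, 8):
    layers = 2 * n
    g = [grad_first_angle(rng.uniform(0, 2 * np.pi, layers * n), n, layers)
         for _ in range(200)]
    print(n, np.var(g))  # variance shrinks rapidly as n grows
```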
Abstract The optimization of quantum circuits can be hampered by a decay of average gradient amplitudes with increasing system size. When the decay is exponential, this is called the barren plateau problem. Considering explicit circuit parametrizations (in terms of rotation angles), Arrasmith et al (2022 Quantum Sci. Technol. 7 045015) showed that barren plateaus are equivalent to an exponential decay of the variance of cost-function differences. We show that the issue is particularly simple in the (parametrization-free) Riemannian formulation of such optimization problems and obtain a tighter bound for the cost-function variance. An elementary derivation shows that the single-gate variance of the cost function is strictly equal to half the variance of the Riemannian single-gate gradient, where variable gates are sampled according to the uniform Haar measure. The total variances of the cost function and its gradient are then both bounded from above by the sum of single-gate variances and, conversely, bound the single-gate variances from above. So decays of gradients and of cost-function variations go hand in hand, and barren plateau problems cannot be resolved by avoiding gradient-based in favor of gradient-free optimization methods.
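In schematic notation (assumed here, since the paper's precise definitions are not reproduced), with Δ_k C the single-gate cost-function difference and grad_k C the Riemannian single-gate gradient for a variable gate U_k sampled from the Haar measure, the abstract's claims read:

```latex
% Schematic restatement of the abstract's claims (notation assumed).
\operatorname{Var}_{U_k}\!\big[\Delta_k C\big]
  \;=\; \tfrac{1}{2}\,\operatorname{Var}_{U_k}\!\big[\operatorname{grad}_k C\big],
\qquad
\operatorname{Var}\!\big[C\big]
  \;\le\; \sum_{k} \operatorname{Var}_{U_k}\!\big[\Delta_k C\big].
```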
Quantum Computing (QC) has gained immense popularity as a potential solution for dealing with the ever-increasing size of data and its associated challenges, leveraging the concept of quantum random access memory (QRAM). QC promises quadratic or exponential speedups through quantum parallelism and thus offers a huge leap forward for the computation of Machine Learning algorithms. This paper analyzes the speed-up performance of QC when applied to machine learning algorithms, known as Quantum Machine Learning (QML). We applied QML methods such as the Quantum Support Vector Machine (QSVM) and the Quantum Neural Network (QNN) to detect Software Supply Chain (SSC) attacks. Due to the access limitations of real quantum computers, the QML methods were implemented on open-source quantum simulators such as IBM Qiskit and TensorFlow Quantum. We evaluated the performance of QML in terms of processing speed and accuracy and, finally, compared it with its classical counterparts. Interestingly, the experimental results differ from the speed-up promises of QC, demonstrating higher computational time and lower accuracy in comparison to classical approaches for SSC attacks.
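For illustration, a quantum-kernel classifier of the QSVM type can be emulated classically. A minimal sketch in plain NumPy and scikit-learn, using an assumed RY product-state feature map and synthetic data standing in for SSC-attack features (the paper itself used IBM Qiskit and TensorFlow Quantum):

```python
import numpy as np
from sklearn.svm import SVC

def feature_state(x):
    """Encode a feature vector into a product state via RY-style rotations (assumed map)."""
    psi = np.array([1.0])
    for xi in x:
        psi = np.kron(psi, np.array([np.cos(xi / 2), np.sin(xi / 2)]))
    return psi

def quantum_kernel(A, B):
    """K[i, j] = |<phi(a_i)|phi(b_j)>|^2, the fidelity kernel."""
    SA = np.array([feature_state(a) for a in A])
    SB = np.array([feature_state(b) for b in B])
    return np.abs(SA @ SB.T) ** 2

# Synthetic data standing in for SSC-attack feature vectors (assumed labels)
rng = np.random.default_rng(7)
Xtr = rng.uniform(0, np.pi, size=(40, 4)); ytr = (Xtr.sum(axis=1) > 2 * np.pi).astype(int)
Xte = rng.uniform(0, np.pi, size=(10, 4)); yte = (Xte.sum(axis=1) > 2 * np.pi).astype(int)

clf = SVC(kernel="precomputed").fit(quantum_kernel(Xtr, Xtr), ytr)
print(f"test accuracy: {clf.score(quantum_kernel(Xte, Xtr), yte):.2f}")
```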