<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Quantum Depth in the Random Oracle Model</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>06/02/2023</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10421116</idno>
					<idno type="doi">10.1145/3564246.3585153</idno>
					<title level='j'>ACM Symposium on Theory of Computing</title>
<idno></idno>
<biblScope unit="volume"></biblScope>
<biblScope unit="issue"></biblScope>					

					<author>Atul Singh Arora</author><author>Andrea Coladangelo</author><author>Matthew Coudron</author><author>Alexandru Gheorghiu</author><author>Uttam Singh</author><author>Hendrik Waldner</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[We give a comprehensive characterisation of the computational power of shallow quantum circuits combined with classical computation. Specifically, for classes of search problems, we show that the following statements hold, relative to a random oracle: (a) BPP^{QNC^{BPP}} ≠ BQP. This refutes Jozsa's conjecture in the random oracle model. As a result, this gives the first instantiatable separation between the classes by replacing the oracle with a cryptographic hash function, yielding a resolution to one of Aaronson's ten semi-grand challenges in quantum computing. (b) BPP^{QNC} ⊈ QNC^{BPP} and QNC^{BPP} ⊈ BPP^{QNC}. This shows that there is a subtle interplay between classical computation and shallow quantum computation. In fact, for the second separation, we establish that, for some problems, the ability to perform adaptive measurements in a single shallow quantum circuit is more useful than the ability to run polynomially many shallow quantum circuits without adaptive measurements. We also show that BPP^{QNC} and QNC^{BPP} are both strictly contained in BPP^{QNC^{BPP}}. (c) There exists a 2-message proof of quantum depth protocol. Such a protocol allows a classical verifier to efficiently certify that a prover must be performing a computation of some minimum quantum depth. Our proof of quantum depth can be instantiated using the recent proof of quantumness by Yamakawa and Zhandry.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>High depth circuits are believed to be strictly more powerful than low depth circuits, in the sense that having deeper circuits allows one to solve a larger set of problems. Indeed, this is a well established fact for both classical and quantum circuits of depth sub-logarithmic in the size of the input <ref type="bibr">[5,</ref><ref type="bibr">6,</ref><ref type="bibr">30,</ref><ref type="bibr">32,</ref><ref type="bibr">41]</ref>. However, for circuits of (poly)logarithmic depth and general polynomial depth, proving any sort of unconditional separation is challenging <ref type="bibr">[39]</ref>. In fact, there is not even an unconditional proof that the set of problems that can be solved by polylog-depth classical circuits, NC, is a strict subset of the set of problems solvable by poly-depth classical circuits, P (or BPP when allowing for randomness). The same is believed to be the case for the quantum analogues of these classes, QNC and BQP, respectively. Nevertheless, the strict containments NC &#8842; P and QNC &#8842; BQP are known to hold in the oracle setting and, in particular, relative to a random oracle <ref type="bibr">[35]</ref>. <ref type="foot">1</ref> This is a strong indication that there are problems in P (BQP) which cannot be parallelised so as to be solvable in NC (QNC). Under the random oracle heuristic, by replacing the random oracle with a cryptographic hash function, one can even provide concrete instantiations of such problems. 
A further indication of the separation between low and high depth computations is provided by certain inherently sequential cryptographic constructions such as time-lock puzzles and verifiable delay functions <ref type="bibr">[16,</ref><ref type="bibr">40]</ref>.</p><p>The study of circuit depth can also yield insights into the subtle relationship between quantum and classical computation by considering hybrid circuit models that combine quantum and classical computation <ref type="bibr">[11,</ref><ref type="bibr">21,</ref><ref type="bibr">28,</ref><ref type="bibr">31]</ref>. In this setting, one can ask the question: how powerful are poly-depth classical circuits when augmented with polylog-depth quantum circuits? Could it be the case that interspersing BPP with QNC computations captures the full power of BQP computations? Jozsa famously conjectured that the answer is yes <ref type="bibr">[33]</ref>. Indeed, there is some evidence to support this conjecture, as the quantum Fourier transform, a central building block for many quantum algorithms, was shown to be implementable with log-depth quantum circuits <ref type="bibr">[25]</ref>. This also implies that Shor's algorithm can be performed by a BPP^{QNC} machine, a polynomial-time classical computer having the ability to invoke a (poly)log depth quantum computer. <ref type="foot">2</ref> Moreover, in the oracle setting, a number of problems yielding exponential separations between quantum and classical computation require only constant quantum depth to solve, providing further support for Jozsa's conjecture <ref type="bibr">[1,</ref><ref type="bibr">3,</ref><ref type="bibr">42]</ref>.</p><p>Despite the evidence in support of Jozsa's conjecture, it was recently shown that, in the oracle setting, the conjecture is false <ref type="bibr">[21,</ref><ref type="bibr">28]</ref>. 
Specifically, the results of <ref type="bibr">[21]</ref> (hereafter referred to as CCL) and <ref type="bibr">[28]</ref> (hereafter referred to as CM) considered two ways of interspersing poly-depth classical computation with d-depth quantum computation. The first is BPP^{QNC_d}, denoting problems solvable by a BPP machine that can invoke d-depth quantum circuits (whose outputs are measured in the computational basis). The second, QNC_d^{BPP}, denotes problems solvable by a d-depth quantum circuit that can invoke a BPP machine at each layer in the computation. <ref type="foot">3</ref> Later, borrowing terminology from <ref type="bibr">[11,</ref><ref type="bibr">21]</ref>, we will refer to the former circuit model as CQ_d and the latter as QC_d. However, for the purposes of this introduction, we will stick to the more familiar notation using complexity classes. Intuitively, BPP^{QNC_d} captures the setting of a classical computer that can invoke a d-depth quantum computer several times. Examples of this include quantum machine learning algorithms such as VQE or QAOA <ref type="bibr">[29,</ref><ref type="bibr">37]</ref>, though, as mentioned, Shor's algorithm is also of this type. On the other hand, QNC_d^{BPP} captures a d-depth measurement-based quantum computation <ref type="bibr">[18,</ref><ref type="bibr">38]</ref>, where intermediate measurements are performed after each layer in the quantum computation. The outcomes of those measurements are processed by a poly-depth classical computation and the results are "fed" into the next quantum layer. CCL and CM showed that there exists an oracle relative to which BPP^{QNC_d} ∪ QNC_d^{BPP} ⊊ BQP, for any d = polylog(n), with n denoting the size of the input. Notably, each work considered a different oracle for showing the separation. For CM, the oracle is the same one as for Childs' glued trees problem <ref type="bibr">[23]</ref>. 
For CCL, the oracle is a modified version of the oracle used for Simon's problem <ref type="bibr">[42]</ref>, where the modification involves performing a sequence of permutations, allowing them to enforce high quantum depth.</p><p>CCL and CM were the first results to provide a convincing counterpoint to Jozsa's conjecture. However, the main drawback of the CCL and CM results is that they are relative to oracles that are highly structured and it is unclear if they can be explicitly instantiated based on some cryptographic assumptions. Indeed, in his "Ten Semi-Grand Challenges for Quantum Computing Theory", Aaronson emphasizes this important distinction, and asks whether there is some instantiatable function that separates the hybrid models from BQP. In this work, we resolve Aaronson's question in the affirmative for the search variants of these classes.</p><p>In contrast to separations between different models of computation running in polynomial time, such as P and NP or BPP and BQP, where several plausible candidates exist for separating the classes, the case for depth separations is much more subtle. As was already observed in <ref type="bibr">[14]</ref>, no standard cryptographic assumption is known to yield a separation between NC and P. The best candidates for such a separation are sequential compositions of hash functions (under the random oracle heuristic) as shown in <ref type="bibr">[35]</ref> and the iterated exponentiation scheme of Rivest, Shamir and Wagner <ref type="bibr">[40]</ref>. Thus, informally, the best we could hope for in terms of an instantiatable separation between the hybrid models and BQP is a separation in the random oracle model which could then be instantiated using cryptographic hash functions. 
We note that while there are known counterexamples to the random oracle heuristic <ref type="bibr">[20]</ref>, these are generally contrived and do not apply to protocols where the random oracle heuristic has been used in practice (and, in particular, are not known to apply to the setting we consider here). Indeed, it has been shown that certain protocols proven secure with respect to a random oracle can also be concretely instantiated using correlation intractable hash functions <ref type="bibr">[19,</ref><ref type="bibr">34]</ref>. We also note that separating the hybrid models from BPP, rather than BQP, in the random oracle model already follows from Aaronson's Fourier Fishing problem <ref type="bibr">[1]</ref>. That problem, of sampling from the Fourier spectrum of the random oracle, is solvable in QNC<ref type="foot">4</ref> but not in BPP. Implicitly, this means that the hybrid search classes BPP^{QNC} and QNC^{BPP} are strictly larger than BPP relative to a random oracle.</p><p>Our work is concerned not only with separations between the hybrid models and BQP in the random oracle model, but also with giving a comprehensive characterization of quantum depth in that model. To that end, we first re-examine Jozsa's conjecture and argue that the natural class associated to "d-depth quantum computation combined with polynomial-time classical computation" is not BPP^{QNC_d} ∪ QNC_d^{BPP}, but BPP^{QNC_d^{BPP}}. This is because, if one has the ability to perform QNC_d^{BPP} computations, certainly it should also be possible to repeat this polynomially many times as well as perform classical processing in between the runs. Note that BPP^{QNC_d} ∪ QNC_d^{BPP} ⊆ BPP^{QNC_d^{BPP}} (in fact, we show strict containment). The separation we then obtain, relative to a random oracle, is BPP^{QNC_d^{BPP}} ⊊ BQP, for any fixed d ≤ poly(n). 
Going beyond this separation, we also show that the hybrid models BPP^{QNC_d} and QNC_d^{BPP} are separated from each other in both directions, relative to a random oracle (in fact, we show that BPP^{QNC_{O(1)}} ⊄ QNC_d^{BPP} and QNC_{O(1)}^{BPP} ⊄ BPP^{QNC_d}), illustrating the subtle interplay between short-depth quantum computation and classical computation. Lastly, by combining the techniques that we develop with previous results on proof of quantumness protocols, we obtain proof of quantum depth protocols: protocols in which a BPP verifier, exchanging 2 messages<ref type="foot">foot_4</ref> with an untrusted quantum prover, can certify that the prover has the ability to perform quantum computations of a minimum depth.</p></div>
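The kind of instantiation the random oracle heuristic enables is easy to make concrete. The sketch below is our own illustration, not a construction from this work: it uses SHA-256 as a heuristic stand-in for the random oracle, in the spirit of the sequentially-composed hash functions of [35]; the claim that t iterations require depth roughly t is a conjecture, not something the code demonstrates.

```python
import hashlib

def iterated_hash(x: bytes, t: int) -> bytes:
    """t-fold sequential composition h(h(...h(x)...)).

    With SHA-256 heuristically playing the role of the random oracle,
    computing this is conjectured to require about t sequential hash
    evaluations, i.e. depth roughly t: each iteration consumes the
    previous digest, so the calls cannot be parallelised in any
    obvious way.
    """
    for _ in range(t):
        x = hashlib.sha256(x).digest()
    return x

out = iterated_hash(b"challenge", 1000)
assert len(out) == 32  # SHA-256 digests are 32 bytes
```

Evaluating `iterated_hash(x, t)` for large t is the standard candidate for a task solvable in poly depth but not in low depth, which is exactly the flavour of separation the random oracle results above make provable.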
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Main Results</head><p>We now state our results more formally and provide some intuition about the proofs. We abuse the notation slightly and use the standard decision complexity class names to refer to their search variants.</p><p>(Figure: (a) Illustration of QNC_d and QNC_d^{BPP} circuits.)</p><p>1.1.1 Lower Bounds on Quantum Depth. We first show the following separation.</p><p>Theorem 1 (informal). Fix any function d ≤ poly(n). Then, relative to a random oracle,<ref type="foot">6</ref> it holds that BPP^{QNC_d^{BPP}} ⊊ BQP.</p><p>As motivated earlier, we take the class BPP^{QNC_d^{BPP}} to capture computations performed by a combination of d-depth quantum computation and polynomial-depth classical computation. The interpretation of our result is that BPP^{QNC_d^{BPP}} can be separated from BQP using the least structured oracle possible, a random oracle. Together with the (quantum) random oracle heuristic, by instantiating the oracle with a cryptographic hash function like SHA-2 or SHA-3, this yields the first plausible instantiation of a problem solvable in BQP but not in BPP^{QNC_d^{BPP}}. This provides a resolution to Aaronson's challenge. The main technical innovation that allows us to achieve the separation is a general lifting lemma that takes any problem separating BPP from BQP in the random oracle model, which additionally satisfies a property that we call classical query soundness, and constructs a problem separating BPP^{QNC_d^{BPP}} and BQP. We show that several known problems satisfy this property. Our lifting lemma is inspired by <ref type="bibr">[21]</ref>, and crucially extends their analysis beyond highly structured oracles. We describe this lifting lemma more precisely in Subsection 1.2.1.</p><p>1.1.2 Proofs of Quantum Depth. It is natural to wonder whether Theorem 1 yields an efficient test to certify quantum depth, i.e. a proof of quantum depth. 
A proof of quantum depth is a more fine-grained version of a proof of quantumness: rather than distinguishing between quantum and classical computation, a proof of quantum depth protocol can distinguish between provers having large or small quantum depth. We show that instantiating our lifting lemma with a problem whose solution is efficiently verifiable immediately yields a proof of quantum depth. One such problem<ref type="foot">7</ref> is due to Yamakawa and Zhandry <ref type="bibr">[43]</ref>. More precisely, we have the following.</p><p>Theorem 2 (informal). Let n be the security parameter and fix any function d ≤ poly(n). In the random oracle model, there exists a two-message protocol between a poly-time classical verifier and a quantum prover such that:</p><p>• Completeness: there is a BQP prover which makes the verifier accept with probability at least 1 − negl(n). • Soundness: no malicious BPP^{QNC_d^{BPP}} prover can make the verifier accept with probability greater than negl(n).</p><p>We emphasise that considering protocols with more than two messages leads to difficulties in formalising the notion of quantum depth. For instance, one can construct protocols where the prover is forced to hold r single-qubit states and subsequently measure them. Information about the basis in which to measure each of these qubits is sent one at a time by the verifier over r messages (the verifier waits for the response to each measurement before sending the next basis). The measurement results are used by the verifier to ensure soundness (each qubit is measured in its preparation basis, and so the outcomes are completely determined). It is not hard to show that if the prover measures these qubits without knowing the measurement basis, it cannot succeed except with negligible probability. 
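The r-message test just described can be mimicked with a small toy model (our own illustration, not the paper's formalism): a qubit measured in its preparation basis returns the encoded bit deterministically, while the conjugate basis gives a coin flip.

```python
import random

random.seed(0)
r = 32  # number of rounds = number of qubits sent by the verifier

# The verifier prepares qubit i in basis bases[i] ("Z" or "X"), encoding bits[i].
bases = [random.choice("ZX") for _ in range(r)]
bits = [random.randrange(2) for _ in range(r)]

def measure(prep_basis, prep_bit, meas_basis):
    # Toy single-qubit model: the preparation basis is deterministic,
    # the conjugate basis is a uniform coin flip.
    return prep_bit if meas_basis == prep_basis else random.randrange(2)

# Waiting prover: stores each qubit and measures only once its basis is
# announced -- depth-1 quantum operations spread over r messages.
waiting = [measure(bases[i], bits[i], bases[i]) for i in range(r)]
assert waiting == bits  # passes every round

# Impatient prover: measures everything in the Z basis up front.
early = [measure(bases[i], bits[i], "Z") for i in range(r)]
# Z-prepared qubits still check out; X-prepared ones are coin flips,
# so this prover fails with overwhelming probability as r grows.
assert all(early[i] == bits[i] for i in range(r) if bases[i] == "Z")
```

The waiting prover illustrates the modelling artefact discussed next: it passes using only coherence and single-gate operations, with no deep circuit anywhere.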
If one attempts to model the prover as a BPP^{QNC_d} or QNC_d^{BPP} circuit, then, because of the delay between messages, it appears that d ≥ r is necessary. However, this can be seen as an artefact of the modelling choice: in practice, the prover only needs d single-qubit quantum computers with quantum depth 1, where the last gate can be delayed until the appropriate message is received in order to pass the test. Essentially, this approach only tests the prover's ability to maintain the coherence of the qubits it received, without actually testing the depth of the circuit it has to perform. In Subsection 1.3, we discuss a possible resolution that captures quantum depth in the interactive setting.</p><p>1.1.3 Tighter Bounds. While Theorem 1 establishes that BPP^{QNC_d^{BPP}} does not capture the computational power of BQP for any fixed d ≤ poly(n), it is not a priori clear if, for instance, BPP^{QNC_{2d+O(1)}^{BPP}} is strictly larger than BPP^{QNC_d^{BPP}}. Indeed, we show that the answer is affirmative.</p><p>Theorem 3 (informal). Fix any function d ≤ poly(n). Relative to a random oracle, it holds that BPP^{QNC_d^{BPP}} ⊊ BPP^{QNC_{2d+O(1)}^{BPP}} (and, more generally, that QNC_{2d+O(1)} ⊄ BPP^{QNC_d^{BPP}}).</p><p>Formally, the theorems treat a call to the quantum random oracle as a depth-1 quantum gate. In practice, if instead the gate requires depth ℓ, then d can be replaced by dℓ. We remark that there exist hash functions that are thought to be quantum-secure which require only logarithmic depth to evaluate <ref type="bibr">[7,</ref><ref type="bibr">36]</ref>. Further, there is reason to believe that such hash functions could also be constructed in ℓ = O(1) depth. In particular, if one is only concerned with specific cryptographic properties (such as collision resistance), then generic constructions are known which convert log-depth hash functions into ones that require only constant depth <ref type="bibr">[9]</ref>.</p><p>1.1.4 Separations between Hybrid Quantum Depth Classes. While both BPP^{QNC} and QNC^{BPP} capture some notion of a hybrid between efficient classical computation and shallow quantum computation, the relationship between the two is not immediately clear. To get a slightly better intuition about the two models, one can think of BPP^{QNC} as capturing an efficient computation that contains polynomially many shallow quantum circuits (separated by measurements and classical computation). On the other hand, one can think of QNC^{BPP} as a single shallow quantum circuit, where one is allowed to make partial measurements of some of the wires, and choose the next gates adaptively. 
While it may not be surprising that there exist problems that can be solved in BPP^{QNC} but not in QNC^{BPP}, it turns out that the two classes are in fact incomparable: each class contains problems that the other does not, relative to a random oracle.</p><p>Theorem 4 (informal). Fix any function d ≤ poly(n). Relative to a random oracle, it holds that BPP^{QNC_{O(1)}} ⊄ QNC_d^{BPP} and QNC_{O(1)}^{BPP} ⊄ BPP^{QNC_d}.</p><p>The second separation is arguably more surprising. It says that, relative to a random oracle, there are problems that can be solved by a single shallow (in fact, constant-depth) quantum circuit with adaptive measurements but cannot be solved by polynomially many shallow quantum circuits without adaptive measurements. The problem that shows QNC_{O(1)}^{BPP} ⊄ BPP^{QNC_d} is a variant of the proof of quantumness from <ref type="bibr">[17]</ref>. The key technical innovation to achieve this separation is a theorem that characterises the structure of strategies that succeed in the protocol of <ref type="bibr">[17]</ref> (this is discussed further in Section 1.2.2). This "structure theorem" crucially strengthens a similar theorem from <ref type="bibr">[26]</ref>, and may be of independent interest.</p><p>Finally, we examine the relationship between BPP^{QNC} ∪ QNC^{BPP} and BPP^{QNC^{BPP}} (see Table 2).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Main Technical Contributions</head><p>1.2.1 Lifting Lemmas. One of the main technical contributions of our work is to prove two general lifting lemmas. These lemmas take problems, defined relative to a random oracle, that are classically hard (in a stronger sense, defined next) and create new problems which are, in addition, hard for specific hybrid quantum depth classes. We describe these lifting lemmas a bit more precisely.</p><p>We say that a problem (defined with respect to the random oracle) is classical query sound if the following holds: any (potentially unbounded time) algorithm which makes only polynomially many classical queries to the random oracle (i.e. no superposition queries), succeeds at solving the problem with at most negligible probability. It turns out that the problem introduced by YZ satisfies this property. Another problem which satisfies this property is inspired by the proof of quantumness protocol defined by Brakerski et al. <ref type="bibr">[17]</ref> (hereafter referred to as BKVV). <ref type="foot">9</ref> For such problems, the following holds.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 6 (informal, simplified).</head><p>There is a procedure<ref type="foot">foot_8</ref> that takes a classical query sound problem P ∈ BQP and creates a new problem P′ := d-Rec[P] such that P′ ∉ BPP^{QNC_d^{BPP}} and P′ ∈ BQP.</p><p>Observe that this lemma makes the problem hard for the most general notion of quantum depth we have considered. To give some intuition about how it is derived, suppose we have a problem P which is classical query sound and denote the random oracle as H. Then P′ = d-Rec[P] is the same problem, defined with respect to a sequential composition of d + 1 random oracles, H̃ := H_{d+1} ∘ ⋯ ∘ H_2 ∘ H_1. In essence, we have substituted H with H̃. This new problem will retain classical query soundness, as H̃ behaves like a random oracle. But in addition, we have now made it so that querying H̃ effectively requires depth d + 1. As QNC_d has depth d, only the BPP parts of BPP^{QNC_d^{BPP}} will be able to query H̃. We can therefore simulate the BPP^{QNC_d^{BPP}} algorithm with an exponential-time algorithm that is limited to polynomially many queries to H̃. By classical query soundness, such an algorithm cannot solve P′, which yields the desired result.</p><p>This was a simplified description of our result. In fact, we show a more refined statement that relates the depth required to solve P′ to the depth required to solve P. In addition, arguing that H̃ behaves like a random oracle and that QNC_d cannot query H̃ requires a careful and more involved analysis. We use Lemma 6 to establish Theorem 3.</p><p>Our second lifting lemma produces a problem that is hard for QNC_d^{BPP}, starting from a problem that satisfies what we call offline soundness. Consider a two-phase algorithm consisting of an online phase, which is a poly-time classical algorithm with access to the random oracle, followed by an offline phase, which is an unbounded-time algorithm with no access to the random oracle. Then, offline soundness requires that no such two-phase algorithm succeeds at solving the problem with non-negligible probability. It turns out, again, that both YZ and BKVV satisfy this property.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2: (Simplified) separations of hybrid quantum depth with respect to the random oracle. The results hold not only for log depth but for any fixed polynomially-bounded depth function.</head><p>Fine-grained advantage of quantum depth.</p><p>Result: BPP^{QNC_{O(1)}} ⊄ QNC^{BPP}. Physical interpretation: running poly many constant-depth quantum circuits (with no adaptive measurements) cannot be simulated by running a single log-depth quantum circuit with adaptive measurements.</p><p>Result: QNC_{O(1)}^{BPP} ⊄ BPP^{QNC}. Physical interpretation: running a single constant-depth quantum circuit with adaptive measurements cannot be simulated by running poly many log-depth quantum circuits (with no adaptive measurements).</p><p>Result: BPP^{QNC_{O(1)}^{BPP}} ⊄ BPP^{QNC} ∪ QNC^{BPP}. Evidence that it is not enough to consider BPP^{QNC} and QNC^{BPP} when studying quantum depth. Physical interpretation: running poly many constant-depth quantum circuits with adaptive measurements cannot be simulated using either (a) poly many log-depth quantum circuits with no adaptive measurements, or (b) a single log-depth quantum circuit with adaptive measurements.</p></div>
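The d-Rec oracle substitution can be pictured with a small sketch (our own, with salted SHA-256 heuristically playing the role of the independent random oracles H_1, ..., H_{d+1}): each oracle's input is the previous oracle's output, so any evaluation of the composed oracle forces d + 1 sequential queries.

```python
import hashlib

def make_oracle(i):
    """Independent random oracle H_i, modelled heuristically as salted SHA-256."""
    def H_i(x: bytes) -> bytes:
        return hashlib.sha256(i.to_bytes(4, "big") + x).digest()
    return H_i

def compose_oracles(d: int):
    """H~ = H_{d+1} o ... o H_1: evaluating it forces d+1 sequential queries."""
    oracles = [make_oracle(i) for i in range(1, d + 2)]

    def H_tilde(x: bytes) -> bytes:
        depth = 0                 # sequential query depth consumed so far
        for H_i in oracles:
            x = H_i(x)            # output of H_i becomes the input of H_{i+1}
            depth += 1
        # A QNC_d circuit has depth d < d + 1, so (intuitively) it cannot
        # complete this chain; only the poly-depth BPP parts can.
        assert depth == d + 1
        return x

    return H_tilde

H = compose_oracles(d=5)
out = H(b"instance")
assert len(out) == 32
```

This is only the intuition behind the lemma; the actual proof must also argue that the composed oracle is indistinguishable from a fresh random oracle and that shallow quantum circuits really cannot shortcut the chain.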
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 7 (informal).</head><p>There is a procedure<ref type="foot">11</ref> which takes a problem P ∈ QNC_{O(1)} with offline soundness and creates a new problem P′ := d-Ser[P] such that P′ ∉ QNC_d^{BPP} and P′ ∈ BPP^{QNC_{O(1)}}.</p><p>Again, we actually show a slightly more general upper bound which depends on the depth required to solve P. (d-Ser[·] is short for d-Serial.) We use Lemma 7 to establish BPP^{QNC_{O(1)}} ⊄ QNC_d^{BPP} (the first separation of Theorem 4). Establishing the other direction (QNC_{O(1)}^{BPP} ⊄ BPP^{QNC_d}) is quite involved and relies heavily on the structure of the problem we consider (explained below). Consequently, it is unclear whether there exists a general lifting lemma that yields hardness for BPP^{QNC_d}.</p><p>We remark that Lemma 7 can, in particular, be applied to lift the problem underlying the proof of quantumness of <ref type="bibr">[17]</ref>. Another technical contribution of this work, which may be of independent interest, is to prove a theorem characterizing the structure of strategies that are successful at the proof of quantumness from <ref type="bibr">[17]</ref>. This theorem is a crucial strengthening of a theorem from <ref type="bibr">[26]</ref>. We employ this theorem as an intermediate step to establish the hybrid separation QNC_{O(1)}^{BPP} ⊄ BPP^{QNC_d}.</p><p>Recall, informally, that the proof of quantumness from <ref type="bibr">[17]</ref> requires the prover to succeed at the following task: given access to a 2-to-1 function g, and to a random oracle H with a one-bit output, find a pair (y, r) such that r · (x_0 ⊕ x_1) = H(x_0) ⊕ H(x_1), where {x_0, x_1} = g^{-1}(y). 
This can be solved in QNC_{O(1)} as follows:</p><p>(i) Evaluate g on a uniform superposition of inputs, yielding Σ_x |x⟩|g(x)⟩; (ii) measure the image register, obtaining some outcome y and a state (|x_0⟩ + |x_1⟩)|y⟩ (up to normalisation); (iii) query a phase oracle for H to obtain ((-1)^{H(x_0)} |x_0⟩ + (-1)^{H(x_1)} |x_1⟩)|y⟩; (iv) make a Hadamard-basis measurement of the first register, obtaining outcome r.</p><p>Informally, our structure theorem establishes that querying at a superposition of pre-images is essentially the only way to succeed (provided finding a collision for g is hard; this is the case when g is a trapdoor claw-free function, as in <ref type="bibr">[17]</ref>, but more generally our theorem also holds, e.g., when g is a uniformly random 2-to-1 function). Denote by n the bit-length of strings in the domain of g.</p><p>Theorem 8 (informal). Let P be any BQP prover that succeeds with 1 − negl(n) probability at the proof of quantumness protocol from <ref type="bibr">[17]</ref>, by making q queries to the oracle H. Then, with 1 − negl(n) probability over pairs (H, y), the following holds. Let p_{y|H} be the probability that P^H outputs y, and let x_0, x_1 be the pre-images of y. Then, for all b ∈ {0, 1}, there exists i ∈ [q] such that the state of the query register of P^H right before the i-th query has non-negligible weight on x_b.</p><p>Note that a version of the above theorem that applies to provers who win with probability non-negligibly greater than 1/2 also holds (but we stated the close-to-ideal version for simplicity). We provide a sketch of how this theorem is used in the proof of the second separation of Theorem 4 in Subsection 1.2.2.</p></div>
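To make steps (i)-(iv) concrete, here is a toy classical simulation (our own; g is a random 2-to-1 pairing and H a random 1-bit table, standing in for the protocol's function and oracle). For the post-query state in step (iii), the Hadamard measurement of step (iv) gives outcome r with amplitude proportional to (-1)^{H(x_0) + r·x_0} + (-1)^{H(x_1) + r·x_1}, which vanishes unless r·(x_0 ⊕ x_1) = H(x_0) ⊕ H(x_1); the surviving amplitudes all have equal magnitude, so sampling uniformly from the valid set reproduces the exact outcome distribution.

```python
import random

n = 6
random.seed(7)

# Toy 2-to-1 function g: randomly pair up the 2^n inputs; H: random 1-bit oracle.
inputs = list(range(2 ** n))
random.shuffle(inputs)
pairs = [(inputs[2 * i], inputs[2 * i + 1]) for i in range(2 ** (n - 1))]
g = {}
for image, (a, b) in enumerate(pairs):
    g[a] = g[b] = image
H = {x: random.randrange(2) for x in range(2 ** n)}

def dot(r, x):
    """Inner product r . x mod 2 over bit strings encoded as ints."""
    return bin(r & x).count("1") % 2

# Steps (i)-(ii): measuring the image register of sum_x |x>|g(x)> yields a
# uniform image y together with the state (|x0> + |x1>)|y>, {x0,x1} = g^{-1}(y).
y = random.randrange(2 ** (n - 1))
x0, x1 = pairs[y]

# Step (iii): the phase query gives ((-1)^H(x0)|x0> + (-1)^H(x1)|x1>)|y>.
# Step (iv): the Hadamard measurement only ever outputs r satisfying the
# verifier's equation, uniformly over that set.
target = H[x0] ^ H[x1]
valid = [r for r in range(2 ** n) if dot(r, x0 ^ x1) == target]
r = random.choice(valid)

assert dot(r, x0 ^ x1) == H[x0] ^ H[x1]  # the verifier's check passes
```

Of course this classical simulation "cheats" by reading both pre-images of y; the point of the protocol is precisely that a classical prover with only query access to g and H cannot do this.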
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">Discussion and Open Problems</head><p>Separations of decision classes and going beyond the random oracle model. Our separations are with respect to search classes. What about decision classes? The Aaronson-Ambainis conjecture <ref type="bibr">[2]</ref> states that one cannot separate the decision versions of BPP and BQP in the random oracle model. Assuming the conjecture is true, this also implies that there cannot be a separation between the decision versions of BPP^{QNC_d^{BPP}} and BQP in the random oracle model. Thus, for the case of decision classes, it would be interesting to see whether there exists some structured oracle separation for which the oracle can be instantiated based on a suitable computational assumption. Both <ref type="bibr">[21]</ref> and <ref type="bibr">[28]</ref> suggested the possibility of using either virtual black-box (VBB) obfuscation or indistinguishability obfuscation (iO) to instantiate their oracles. For the former, one difficulty is that it is known that one cannot VBB-obfuscate general functions <ref type="bibr">[12,</ref><ref type="bibr">13]</ref>. Irrespective of this fact, however, there is a more general obstacle that applies to either type of obfuscation. As we mentioned in the introduction, it was noted in <ref type="bibr">[14]</ref> that no standard cryptographic assumption (not even the existence of iO) is known to imply a depth separation (even between P and NC). It therefore seems that one either has to use non-standard assumptions to prove the separations or make a significant advancement either in refuting the Aaronson-Ambainis conjecture or in proving P ≠ NC from a standard cryptographic assumption.</p><p>The limitations of using cryptographic assumptions to prove depth separations also apply to the search classes. 
In some sense, separations with respect to a random oracle are the best we can hope for, given the current techniques in computational complexity theory. This is peculiar, since one would imagine that using more structured non-oracular problems would allow one to prove stronger separations. The random oracle is the least structured type of oracle, but the fact that it is an oracle helps in establishing provable lower bounds.</p><p>Further questions in the random oracle model. Recall that our lifting lemma, turning a classical query sound problem into a problem which can separate BPP^{QNC_d^{BPP}} and BQP in the random oracle model, could be instantiated with the proof of quantumness from <ref type="bibr">[43]</ref>. The resulting problem inherits the property that solutions can be publicly verified. We thus obtain a proof of quantum depth that is publicly verifiable. Unlike a proof of quantumness, the proof of quantum depth is sound against a family of quantum provers (in this case, BPP^{QNC_d^{BPP}} provers). Can we further push this quantum soundness to obtain verification of BQP with a BPP verifier relative to a random oracle?</p><p>We have also seen that making use of a problem inspired by the Brakerski et al. <ref type="bibr">[17]</ref> proof of quantumness allows us to prove more fine-grained separations between hybrid classes. It is then natural to ask whether these separations also yield finer-grained proofs of quantum depth (which are sound against BPP^{QNC_d^{BPP}} provers and complete for a BPP^{QNC_{2d+O(1)}^{BPP}} prover). This does not immediately follow from our results, as the problem we construct from BKVV is not efficiently verifiable, and our current techniques do not directly extend to the computationally-bounded setting. We therefore leave this as an open problem.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Generalizing beyond BPP QNC BPP &#119889;</head><p>We have argued that BPP QNC BPP &#119889; is the most natural class capturing the notion of &#119889;-depth quantum computation, combined with polynomial-depth classical computation. However, for the purpose of certifying quantum depth, as we have mentioned earlier (and as we discuss in more detail in <ref type="bibr">Example 12</ref>), the situation becomes more subtle when the certification protocol involves interaction. We therefore propose that any protocol which establishes quantum depth &#119889; and uses &#119903; rounds of interaction should be sound against at least an &#119903;-level generalization of BPP QNC BPP &#119889; (e.g. a 2-level generalization with quantum depth &#119889; would be BPP QNC &#119889; BPP QNC BPP &#119889; ; here 2 counts the number of times QNC &#119889; appears in the tower of complexity classes, so that an &#119903;-level generalization would have &#119903; appearances of QNC &#119889; ). In our case, since the proof of depth protocols are single-round, we show the necessary soundness against a BPP QNC BPP &#119889; prover. Of course, there are other possible ways to define hybrid &#119889;-depth quantum-classical computation. For instance, one can define the class QDepth &#119889; of problems solved by polynomial-sized circuits with quantum and classical gates, where the key constraint is that the longest path connecting quantum gates (with quantum wires) is at most &#119889;. We expect our separating problems (and &#119889;-Rec[P] in general, for classical query sound P) not to be contained in QDepth &#119889; . We also expect that Q &#119889; H &#8842; QDepth &#119889; , but we leave the proof as future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.4">Previous Work</head><p>We compare our results to the previous works <ref type="bibr">[21]</ref>, <ref type="bibr">[28]</ref>, <ref type="bibr">[11]</ref>, and <ref type="bibr">[22]</ref>.</p><p>Comparison to <ref type="bibr">[21]</ref>, <ref type="bibr">[28]</ref> and <ref type="bibr">[11]</ref>. Compared to previous work on the topic, our work gives a comprehensive treatment of the complexity of hybrid quantum-classical computation.</p><p>As mentioned earlier, the primary difference compared to <ref type="bibr">[21]</ref> and <ref type="bibr">[28]</ref> is that all of our separations are with respect to a random oracle, rather than with respect to highly structured oracles. However, one caveat is that our separations are for search problems. Our contribution is also conceptual. We propose BPP QNC BPP &#119889; as the appropriate model to capture "&#119889;-depth quantum computation combined with polynomial-time classical computation". While <ref type="bibr">[21]</ref> and <ref type="bibr">[28]</ref> showed that BPP QNC &#119889; &#8746; QNC BPP &#119889; &#8840; BQP, we show the stronger result that BPP QNC BPP &#119889; &#8840; BQP. Our work also shows separations between different hybrid models. Such separations were considered in <ref type="bibr">[11]</ref>, where they are again proven only with respect to highly structured oracles.</p><p>In terms of techniques, we take inspiration and ideas from both <ref type="bibr">[21]</ref> and <ref type="bibr">[11]</ref>. In particular we build on two key ideas-the sampling argument and domain hiding. One of the main contributions of our analysis is to abstract and generalise these techniques beyond their original scope which was tailored to specific promise problems. While most of our results build on these techniques, we also point out that to prove the separation between the hybrid models QNC BPP O(1) &#8840; BPP QNC &#119889; we use entirely different ideas. 
In particular, as an intermediate step, we establish a theorem that characterizes the structure of strategies that succeed in the proof of quantumness of BKVV, which may be of independent interest.</p><p>Comparison to <ref type="bibr">[22]</ref>. The work of <ref type="bibr">[22]</ref> was the first to consider proofs of quantum depth. However, the notion of soundness that they propose, and their corresponding protocol (in the single prover setting), suffer from the issues that we discussed after Theorem 2 (and in Example 12 below).</p><p>In particular, their protocol can be spoofed by a &#119889;-level tower of BPP QNC BPP O(1) (as described in Subsection 1.3). In practical terms, this means that it can be spoofed by running several constant-depth quantum computers in parallel, provided the "idle coherence time" of each quantum computer is longer than the time that elapses between messages in the protocol. In contrast, our proof of depth protocol does not suffer from this issue and can be used to certify that the prover is able to perform computations "beyond" BPP QNC BPP &#119889; .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">TECHNICAL OVERVIEW</head><p>Here we give a high level technical overview of the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Bounds on Quantum Depth</head><p>In this subsection, we describe the proof of Theorem 1. As mentioned previously, our main technical contribution is a general lifting lemma that takes any problem separating BPP from BQP in the random oracle model, which additionally satisfies a property that we call classical query soundness, and constructs a problem separating BPP QNC BPP &#119889; and BQP. We first explain the key idea behind this construction. To be concrete, after describing the key idea, we restrict to an NP search problem due to Yamakawa and Zhandry <ref type="bibr">[43]</ref>, which satisfies classical query soundness (this problem is particularly appealing because it is in NP, and thus solutions can be publicly verified; however, we emphasize that other known search problems that are not in NP can also be used for the separation). We then build towards a proof that this problem is not in BPP QNC BPP &#119889; by considering hardness for the three special cases QNC &#119889; , QNC BPP &#119889; and BPP QNC &#119889; . The desired result is obtained by combining the ideas in these three cases.</p><p>Let P be a (search) problem, defined relative to a random oracle &#119867; , that separates BPP from BQP. Suppose that P is such that it requires quantum access to &#119867; in order to be solved with polynomially many queries (classical query soundness will eventually require a bit more than this). As mentioned in Subsection 1.2.1, the first natural idea to lift this to a separation between low quantum depth and polynomial quantum depth is to replace the evaluation of &#119867; with a sequential evaluation of random oracles. For example, suppose that originally &#119867; : &#931; &#8594; {0, 1} &#119899; . Then, let &#119867; 0 , . . . , &#119867; &#119889;-1 : &#931; &#8594; &#931;, define H := &#119867; &#8226; &#119867; &#119889;-1 &#8226; &#8226; &#8226; &#8226; &#8226; &#119867; 0 , and let P &#8242; be the problem that is identical to P except that it is relative to H . 
Then, it is natural to imagine that P &#8242; requires quantum depth at least &#119889; + 1 to solve. This idea does not quite work right away, since H , as defined, is not actually a uniformly random oracle any more. This is because with every &#119867; &#119894; that is added, the number of collisions in H increases (on average). To remedy this, one could assume that &#119867; 0 , . . . , &#119867; &#119889;-1 are random permutations (although note that random permutations cannot be generically constructed from random oracles). A similar idea works in a different setting, for arguing about the post-quantum security of "proofs of sequential work" <ref type="bibr">[15]</ref>. However, in our case, the analysis is complicated by the fact that we consider hybrid models. CCL were the first to consider a variant of sequential hashing (sequential permutations), in the context of hybrid models. However, their analysis only works for certain structured oracles. In this work, we adapt their ideas to the random oracle setting and overcome these difficulties.</p><p>Lifting P &#8713; BPP to P &#8242; &#8713; BPP QNC BPP &#119889; . Given a problem P with respect to &#119867; , we define the problem P &#8242; = &#119889;-Rec[P] to be P with respect to H = &#119867; &#119889; &#8226; &#8226; &#8226; &#8226; &#8226; &#119867; 0 , where &#119867; 0 , . . . , &#119867; &#119889; are independent random oracles with the following domains and co-domains: &#119867; 0 : &#931; &#8594; &#931; &#119889; &#8242; , &#119867; &#119894; : &#931; &#119889; &#8242; &#8594; &#931; &#119889; &#8242; for &#119894; &#8712; {1, . . . &#119889; -1}, and &#119867; &#119889; : &#931; &#119889; &#8242; &#8594; {0, 1} &#119899; , for a suitable &#119889; &#8242; &gt; &#119889;.</p><p>Notice that &#119867; 0 is not surjective, as its codomain is much larger than its image. <ref type="foot">12</ref> In fact, this is also true for &#119867; &#119894; &#8226; &#8226; &#8226; &#8226; &#8226; &#119867; 0 , for all &#119894; &lt; &#119889;. This, and the fact that the &#119867; &#119894; functions are random, have two important consequences. First, it means that, with high probability, &#119867; &#119894; &#8226; &#8226; &#8226; &#8226; &#8226; &#119867; 0 is injective on &#931; for every &#119894; &lt; &#119889;, and so H behaves like a random oracle. Consequently, P &#8242; inherits the soundness and completeness of P. 
Second, it means that one can apply a "domain hiding" technique, which, at a high level, works as follows. One way of evaluating H at &#119909; &#8712; &#931; is to sequentially compose &#119867; 0 , &#119867; 1 , . . . , &#119867; &#119889; which would require depth &#119889; + 1. Intuitively, it seems unlikely that there is a more depth efficient way of evaluating H because the domain on which the &#119867; &#119894; 's need to be evaluated (which is</p><p>) is getting shuffled and lost in an exponentially larger domain (which is &#931; &#119889; &#8242; ). Therefore, even though one has access to all L = (&#119867; 0 , &#119867; 1 , . . . , &#119867; &#119889; ) oracles at the first layer of depth, one only knows that &#119867; 0 needs to be queried at &#931; but the algorithm has no information about where the relevant domains of &#119867; 1 . . . &#119867; &#119889; are. At the second depth layer, the algorithm can learn &#119867; 0 (&#931;) and so learns where to query &#119867; 1 but, and this needs to be shown, it still does not know where the relevant domains of &#119867; 2 , . . . &#119867; &#119889; are. By starting with a sufficiently large expansion, i.e. a sufficiently large &#119889; &#8242; &gt; &#119889;, this argument can be repeated until depth &#119889; where the relevant domain of &#119867; &#119889; still remains hidden. Thus, even though P &#8242; can potentially be solved with &#119889; + 1 depth, it cannot be solved with depth &#119889;. This is the basic idea behind why the problem is not in QNC &#119889; . Instead of working with P and &#119889;-Rec[P] abstractly, we consider the following concrete problem.</p></div>
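The sequential structure that the lifting exploits can be sketched concretely. In the toy Python sketch below, seeded SHA-256 hashes stand in for the independent random oracles (an illustrative modelling choice; the names make_oracle and lifted_oracle are ours, not part of the construction), and the composed oracle can only be evaluated through d + 1 sequential rounds of queries:

```python
import hashlib

def make_oracle(seed: bytes):
    """Toy stand-in for an independent random oracle: seeded SHA-256."""
    def H(x: bytes) -> bytes:
        return hashlib.sha256(seed + x).digest()
    return H

d = 4  # illustrative target depth
oracles = [make_oracle(bytes([i])) for i in range(d + 1)]  # H_0, ..., H_d

def lifted_oracle(x: bytes) -> bytes:
    """Evaluate the composed oracle H = H_d ∘ ... ∘ H_0. Every call
    depends on the previous answer, so evaluating H takes d + 1
    sequential rounds of oracle queries."""
    for H in oracles:
        x = H(x)
    return x
```

Each query depends on the previous answer, which is the intuition for why a circuit of quantum depth d should not be able to evaluate the composed oracle, while depth d + 1 suffices.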
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.1">&#119889;-CodeHashing</head><p>The problem. We refer to the problem introduced by Yamakawa and Zhandry <ref type="bibr">[43]</ref> as CodeHashing in this work. The problem is stated in terms of a family of error-correcting codes called suitable codes. For our purposes, it suffices to think of suitable codes as a family of sets {&#119862; &#120582; } &#120582; where each &#119862; &#120582; is a set of codewords {(x 1 , . . . x &#119899; )} with each coordinate x &#119894; belonging to some alphabet &#931;. The size of this alphabet, |&#931;| = 2 &#120582; &#920;(1) , is exponential in &#120582;, and the number of components, &#119899; = &#920;(&#120582;), is essentially linear in &#120582;.</p><p>CodeHashing is defined as follows.</p><p>Definition 9 (CodeHashing; informal). Let {&#119862; &#120582; } &#120582; be a suitable code and let &#119867; : {0, 1} log &#119899; &#215; &#931; &#8594; {0, 1} be a random oracle. Given a description of the suitable code (e.g. as parity check matrices) and oracle access to &#119867; , on input 1 &#120582; , the problem is to find a codeword x = (x 1 . . . x &#119899; ) &#8712; &#119862; &#120582; such that<ref type="foot">13</ref> &#119867; (&#119894; ||x &#119894; ) = 1 for all &#119894; &#8712; {1 . . . &#119899;}.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Note that CodeHashing is an NP search problem: from, e.g., the parity check matrix of the code, it is easy to verify that x is indeed a codeword, and with a single parallel query (&#119899; queries in total) to &#119867; , one can check that it hashes correctly.</p><p>YZ shows that CodeHashing satisfies the following two properties.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 10 (Paraphrased from YZ). The following hold.</head><p>&#8226; Completeness: There is a QPT machine which solves CodeHashing with probability 1 -negl(&#120582;) and makes only one parallel query to &#119867; . &#8226; Soundness: Every (potentially unbounded time) classical circuit which makes at most 2 &#120582; &#119888; queries to &#119867; , with &#119888; &lt; 1, solves CodeHashing with probability at most 2 -&#937; (&#120582;) .</p><p>The fact that soundness holds against unbounded time classical circuits which make only poly-many queries to the random oracle is essential in proving that BPP QNC BPP &#8842; BQP. Applying our lifting map &#119889;-Rec[P] to CodeHashing, we obtain the following.<ref type="foot">14</ref></p><p>Definition 11 (&#119889;-CodeHashing; informal). Let {&#119862; &#120582; } &#120582; be a suitable code, and let H = &#119867; &#119889; &#8226; &#8226; &#8226; &#8226; &#8226; &#119867; 0 , where &#119867; 0 , . . . , &#119867; &#119889; are as in Section 2.1. Given a description of the suitable code and access to the random oracles L = (&#119867; 0 . . . &#119867; &#119889; ), on input 1 &#120582; , find a codeword x = (x 1 . . . x &#119899; ) &#8712; &#119862; &#120582; such that bit &#119894; [H (x &#119894; )] = 1 for all &#119894; &#8712; {1 . . . &#119899;}.</p><p>To convey the key ideas behind the proof that &#119889;-CodeHashing &#8713; BPP QNC BPP &#119889; , we first consider the QNC &#119889; case in some more detail, and extend the analysis to QNC BPP &#119889; . We then analyse the BPP QNC &#119889; case, which uses a technique called the "sampling argument" due to <ref type="bibr">[27]</ref>. These ideas were first considered in the structured oracle setting by <ref type="bibr">[21]</ref> and <ref type="bibr">[11]</ref>. We adapt them to show &#119889;-CodeHashing &#8713; BPP QNC BPP &#119889; relative to a random oracle.</p><p><ref type="foot">13</ref> We use &#119886; ||&#119887; to mean the concatenation of &#119886; and &#119887;. <ref type="foot">14</ref> We use bit &#119894; [H (x &#119894; )] to denote the &#119894;-th bit of H (x &#119894; ).</p></div>
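To illustrate why CodeHashing solutions are publicly verifiable with one parallel query, here is a minimal sketch; a toy repetition code stands in for a suitable code and SHA-256 for the random oracle (both are our illustrative assumptions, with parameters far smaller than the ones the theorem needs):

```python
import hashlib

n = 4            # number of codeword coordinates (toy choice)
SIGMA = 2 ** 12  # toy alphabet size; the theorem needs |Σ| superpolynomial

def H(i: int, x_i: int) -> int:
    """Toy random oracle H : {0,1}^log n × Σ → {0,1} (SHA-256 stand-in)."""
    return hashlib.sha256(f"{i}||{x_i}".encode()).digest()[0] & 1

def is_codeword(x) -> bool:
    """Toy membership test: a repetition code stands in for a suitable
    code; a real instance would check parity-check equations instead."""
    return len(x) == n and all(c == x[0] for c in x)

def verify(x) -> bool:
    """CodeHashing verification: x is a codeword and H(i || x_i) = 1 for
    all i. The n oracle calls are independent: one parallel query."""
    return is_codeword(x) and all(H(i, x[i]) == 1 for i in range(n))

# Brute-force a solution for the toy instance; on real parameters this
# kind of classical search is exactly what query soundness rules out.
solution = next(([v] * n for v in range(SIGMA) if verify([v] * n)), None)
```

In a real instance the codeword check would use the parity-check matrix of the suitable code, and the exponential alphabet makes the brute-force search above infeasible for classical query-bounded algorithms.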
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.2">&#119889;-CodeHashing</head><p>Base sets. We started our discussion in Subsection 2.1 by observing that the analysis is simplified by taking &#119867; 0 . . . &#119867; &#119889;-1 to be injective functions. However, for a large enough &#119889; &#8242; , it is not hard to see that this is indeed the case on an appropriately restricted domain. The sets which describe this restricted domain are chosen randomly. We call them base sets and denote them by &#119878; 01 , . . . &#119878; 0&#119889; (corresponding to &#119867; 1 , . . . &#119867; &#119889; respectively). Observe that &#119867; 0 maps &#931; to &#931; &#119889; &#8242; (which is exponentially larger than &#931;; recall that |&#931;| = 2 &#120582; &#920;(1) ) and, since &#119867; 0 is a random function, the probability that this mapping is injective is 1 -negl(&#120582;). Pick any set &#119878; 01 &#8838; &#931; &#119889; &#8242; uniformly at random in the domain of &#119867; 1 subject to two constraints: (1) it includes &#119867; 0 (&#931;), i.e. the domain of &#119867; 1 on which the value of H depends, and (2) its size is |&#119878; 01 | = |&#931;| &#119889;+2 . The first constraint ensures that the domain we care about is included in the base sets and the second ensures that: (a) |&#119878; 01 | is exponentially smaller than |&#931;| &#119889; &#8242; and (b) |&#119878; 01 | is large enough for applying "domain hiding" as mentioned above. Define &#119878; 0&#119894; := &#119867; &#119894;-1 (. . . &#119867; 1 (&#119878; 01 ) . . . ) to be the image of &#119878; 01 through the first 1 to (&#119894; -1)'th oracles for &#119894; &#8712; {2 . . . &#119889; }. Let &#119864; denote the event that &#119867; 0 is injective and &#119867; 1 . . . &#119867; &#119889;-1 are injective on the base sets. We show that &#119864; (given our choice for &#119889; &#8242; ), occurs with overwhelming probability. 
In the subsequent discussion, we assume that base sets have been selected and that &#119864; occurs.</p><p>Proof idea. We describe the proof that &#119889;-CodeHashing &#8713; QNC &#119889; in some more detail, which implements the previously described "domain hiding" idea and proceeds via a hybrid argument. Denote a QNC &#119889; circuit that makes &#119889; parallel calls to the oracle L = (&#119867; 0 , . . . &#119867; &#119889; ) by</p><p>Here, &#120588; 0 is some initial state, &#119880; &#119894; are single layered unitaries, and the composition is meant to act as conjugation, i.e. &#119880; 1 &#8226; &#120588; 0 = &#119880; 1 &#120588; 0 &#119880; &#8224; 1 . We show that the behaviour of such a circuit, i.e. its probability of outputting a valid answer, is negligibly close to the behaviour of another circuit</p><p>where M 1 , . . . M &#119889; are "shadow oracles" corresponding to L that contain no information about the values taken by H on &#931;. Clearly then, this circuit cannot be solving &#119889;-CodeHashing because it never queries H . This in turn means that the original circuit also cannot solve &#119889;-CodeHashing, which implies &#119889;-CodeHashing &#8713; QNC &#119889; . It remains to define M 1 . . . M &#119889; and to argue that the two circuits have essentially the same behaviour. Using a hybrid argument, one can establish the latter by showing that the following are close in trace distance: (1)</p><p>and so on. To convey intuition, we sketch these steps one at a time, and we define M 1 . . . M &#119889; as we proceed. We restrict to base sets &#119878; 01 . . . &#119878; 0&#119889; as described above.</p><p>Hybrid 1. Let &#119878; 1&#119895; := &#119867; &#119895;-1 (&#119878; 1,&#119895;-1 ) be the propagation of &#119878; 11 through &#119867; 1 to &#119867; &#119895;-1 . Here, we are trying to define a sequence of sets (&#119878; 11 , . . . 
&#119878; 1&#119889; ) on which we require that M 1 outputs &#8869; and outside of these sets, we require that M 1 behaves just like L, i.e. if one denotes M 1 = (&#119867; 0 , &#119872; 11 , . . . &#119872; 1&#119889; ), then we require that &#119872; 1&#119894; behaves as &#119867; &#119894; outside &#119878; 1&#119894; and outputs &#8869; inside &#119878; 1&#119894; . To be concise, we will say that M 1 is a shadow oracle of L with respect to (&#119878; 11 . . . &#119878; 1&#119889; ). Why do we want this behaviour? For &#119878; &#119894; := &#119867; &#119894;-1 (. . . &#119867; 0 (&#931;) . . . ), M 1 clearly contains no information about H on &#931;, since &#119878; &#119895; &#8838; &#119878; 1&#119895; . But why couldn't we just have chosen (&#119878; 1 . . . &#119878; &#119889; ) instead of (&#119878; 11 . . . &#119878; 1&#119889; ) to define M 1 ? Briefly, this is because choosing to hide an exponentially larger set (note that |&#119878; 11 | = |&#931;| &#119889;+1 while |&#119878; 1 | = |&#931;|) allows us to easily apply similar arguments in the subsequent hybrids. This will become evident shortly. Recalling our goal, we want to establish that L &#8226; &#119880; 1 &#8226; &#120588; 0 and M 1 &#8226; &#119880; 1 &#8226; &#120588; 0 are close in trace distance. To do this, we use the so-called one-way to hiding (O2H) lemma <ref type="bibr">[8]</ref>. Informally, the lemma, as applied to our situation, says that if (a) the input state &#120588; 0 contains no information about the set where L and M 1 behave differently, and (b) the probability of finding any element inside this set is negligible, then the trace distance between the two states of interest is negligible. 
The lemma clearly applies in our case because (a) initially the algorithm contains no information about L (it has not yet made any queries) and (b) the probability of finding any element in the set &#119878; 1&#119894; where L and M 1 behave differently, without knowing anything about L, is at most |&#119878; 1&#119894; |/|&#119878; 0&#119894; | = negl(&#120582;), for each &#119894; &#8712; {1 . . . &#119889; }, and thus still negligible by a union bound.</p><p>In this step, we will see the advantage of having chosen a sequence of sufficiently large sets (&#119878; 11 , . . . &#119878; 1&#119889; ) where M 1 outputs &#8869;. Let us begin with examining the information contained in &#120588; 1 about L. In the previous case, &#120588; 0 contained no information about L. Since &#120588; 1 only learns about L by querying M 1 , it suffices to examine the information contained in M 1 . Since M 1 does not hide any information about &#119867; 0 , &#120588; 1 could have learnt &#119878; 1 = &#119867; 0 (&#931;). Recall also that &#119878; 1 &#8838; &#119878; 11 . This means that if one were to take M 2 equal to M 1 , then one cannot expect L&#8226;&#119880; 2 &#8226; &#120588; 1 to be close to M 2 &#8226;&#119880; 2 &#8226; &#120588; 1 in general because &#119880; 2 could query the oracle at &#119878; 1 and the outputs of the two circuits would be different with probability one-M 1 outputs &#8869; while L does not. Consequently, when constructing M 2 , we do not hide anything about &#119867; 1 . As for &#119867; 2 . . . &#119867; &#119889; , note that, M 1 contains no information about the behaviour of L inside &#119878; 12 , &#119878; 13 . . . &#119878; 1&#119889; . We can therefore, treat &#119878; 12 . . . &#119878; 1&#119889; as the new "base sets" and proceed analogously. 
Let &#119878; 22 &#8838; &#119878; 12 be a random subset of &#119878; 12 , subject to the constraint (as before) that (a) it includes &#119878; 2 = &#119867; 1 (&#119867; 0 (&#931;)) and (b) its size is, as before, exponentially smaller than |&#119878; 12 | yet large enough for domain hiding, and let &#119878; 2 &#119895; := &#119867; &#119895;-1 (&#119878; 2,&#119895;-1 ) for &#119895; &#8712; {3 . . . &#119889; }. Defining M 2 to be the shadow oracle of L with respect to (&#8709;, &#119878; 22 , . . . &#119878; 2&#119889; ), one can again apply the O2H lemma to conclude that L &#8226; &#119880; 2 &#8226; &#120588; 1 and M 2 &#8226; &#119880; 2 &#8226; &#120588; 1 are close in trace distance.</p><p>Generalising the argument above, one sees that the sets &#119878; &#119894; &#119895; constitute a triangular array (where the &#119894;-th row corresponds to the sets on which M &#119894; outputs &#8869;), which clarifies why the argument can only be applied for &#119889; steps (as we expect). To see this, note that at the &#119889;th step, all oracles except the last have been completely revealed (last row). Crucially, the last oracle is blocked at &#119878; &#119889; &#8838; &#119878; &#119889;&#119889; and therefore reveals no information about H (&#931;). If one proceeds with the (&#119889; + 1)-th step, all oracles are revealed and one can no longer argue that the algorithm does not access H (&#931;).</p><p>Observe that, so far, we have not used the fact that CodeHashing is classically hard, only that, without access to the oracle, the problem cannot be solved. The classical hardness comes into play once BPP computations are allowed.</p></div>
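The shadow-oracle idea above can be sketched in a few lines; the hidden set and the SHA-256 stand-in for the oracle are hypothetical placeholders (the actual argument is about trace distance and the O2H lemma, which this classical sketch does not capture):

```python
import hashlib

BOT = object()  # stands in for the symbol ⊥

def make_oracle(seed: bytes):
    """Toy stand-in for a random oracle (seeded SHA-256)."""
    return lambda x: hashlib.sha256(seed + x).digest()

def shadow(H, hidden):
    """Shadow oracle of H with respect to a set: identical to H outside
    the set, outputs ⊥ inside it, hence reveals nothing about H there."""
    return lambda x: BOT if x in hidden else H(x)

H1 = make_oracle(b"H1")
S11 = {b"p", b"q"}   # hypothetical hidden set, standing in for S_11
M1 = shadow(H1, S11)
```

The point of the hybrid argument is that, as long as the algorithm is unlikely to query inside the hidden set, swapping the real oracle for its shadow changes the circuit's output state only negligibly.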
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.3">&#119889;-CodeHashing &#8713; QNC BPP &#119889;</head><p>We now sketch how one goes from arguing &#119889;-CodeHashing &#8713; QNC &#119889; to arguing &#119889;-CodeHashing &#8713; QNC BPP &#119889; . A QNC BPP &#119889; circuit can be written, analogously to the QNC &#119889; case, as an interleaving of unitary layers, oracle calls, classical algorithms and measurements, where A &#119894; denotes a classical algorithm, and &#928; &#119894; denotes a (possibly partial) measurement. The analogous circuit with shadow oracles is denoted by replacing each call to L with a call to the corresponding M &#119894; .</p><p>The idea, again, is to establish, via a hybrid argument, that the two circuits are close in trace distance. In the QNC &#119889; case, thanks to the depth of the circuit being &#119889;, we were able to argue that any QNC &#119889; algorithm behaves equivalently if we take away its access to H . When trying to argue that a QNC BPP &#119889; algorithm cannot solve the problem, we have to be more careful because the BPP part has sufficient depth to make queries to H . In our argument, this will affect how the shadow oracles M &#119894; are defined.</p><p>In some more detail, we allow the classical algorithm to make "path queries", which intuitively just means that if &#119867; &#119894; is queried at &#119909; &#119894; , the algorithm also learns (&#119909; 0 , &#119909; 1 . . . &#119909; &#119889; ) such that <ref type="foot">15</ref> &#119909; &#119895;+1 = &#119867; &#119895; (&#119909; &#119895; ) for all &#119895;. This of course can only help the algorithm.</p><p>The key idea is that we account for the "paths" that have been queried classically until depth &#119894; and define M &#119894; to be consistent with those (i.e. it never outputs &#8869; on these paths). As before, we can replace queries to L with queries to M &#119894; that contain no information about H except for the paths which were classically queried. Appealing to the soundness of CodeHashing, such an algorithm cannot succeed. This is because CodeHashing has the property that even an unbounded classical algorithm cannot succeed if it only makes polynomially many queries to the oracle.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.4">&#119889;-CodeHashing &#8713; BPP QNC &#119889;</head><p>Observe that a poly-depth quantum circuit can access H , and since a BPP QNC &#119889; circuit contains polynomially many QNC &#119889; circuits, it is not a priori clear that BPP QNC &#119889; cannot also access H . This is why the approach we used to prove that &#119889;-CodeHashing &#8713; QNC &#119889; cannot be applied directly. Crucially, to argue that the problem is not in BPP QNC &#119889; , one must use the fact that the contents of each QNC &#119889; circuit are measured entirely, and that each QNC &#119889; circuit takes only classical inputs. In order to handle the classical information that each QNC &#119889; circuit receives as input, we use a technique called the "sampling argument". In essence, this says that if L has high entropy (which is to say that the oracles being queried are sufficiently random), then conditioned on any string &#119904; correlated with it, the resulting L|&#119904; behaves as a "convex combination" of high-entropy distributions with a small fraction of their values completely fixed. This allows us to reduce the analysis to that of a particular set of paths being exposed, which we can handle by proceeding as in the QNC BPP &#119889; case. A similar argument was used by CCL to establish that a problem is not in BPP QNC &#119889; with respect to a (structured) oracle. Their analysis used a sequence of permutation oracles and was simplified by viewing the oracles, equivalently, as distributions over paths (as opposed to a sequence of functions assigning values to individual points). The paths viewpoint was particularly helpful when considering the "sampling argument" (the version we use is derived from <ref type="bibr">[27]</ref>). <ref type="bibr">[11]</ref> showed that such a sampling argument can be obtained for almost any oracle which can be viewed as a distribution over paths. 
In our setting, since the oracles are random, paths can collide. Thus, one needs to define a suitable notion of "paths" in this setting. We provide more details in the next three paragraphs. However, since these are relatively more technical, one may wish to skip directly to Subsection 2.1.5 on a first read.</p><p>Sampling argument for Permutations. Suppose &#119905; is a permutation over &#119873; elements labelled {0, . . . , &#119873; -1}. This permutation &#119905; is ordinarily viewed as a function, with &#119905; (&#119909;) specifying how &#119909; is mapped. However, one could equivalently view &#119905; as a collection of pairs (or tuples later) (&#119909;, &#119910;) such that &#119905; (&#119909;) = &#119910;. We call such a pair a "path". Now consider distributions over permutations. Let us begin with the uniform distribution F over all permutations &#119906;. One may characterise F as follows: for any &#119906; &#8764; F, i.e. any &#119906; sampled from F, and any pair (&#119909;, &#119910;), it holds that Pr[&#119906; (&#119909;) = &#119910;] = 1/&#119873; .</p><p>We first state a basic version of the sampling argument. To this end, we define a (&#119901;, &#120575;) non-uniform distribution, F (&#119901;,&#120575;) , which is closely related to the uniform distribution F. At a high level, F (&#119901;,&#120575;) is "&#120575; close to" F with at most &#119901; many paths fixed. What does "&#120575; closeness" mean? Let Pr[&#119878; &#8838; paths(&#119906;)] denote the probability that a collection &#119878; of (non-colliding) paths is in &#119906;. Then, for any distribution G (over permutations), a distribution G &#120575; is &#120575; close to it if the following holds: when &#119905; &#8242; &#8764; G &#120575; and &#119905; &#8764; G, the probabilities Pr[&#119878; &#8838; paths(&#119905; &#8242; )] and Pr[&#119878; &#8838; paths(&#119905;)] agree up to a multiplicative factor controlled by &#120575; and |&#119878; |.</p><p>We are almost ready to state the basic sampling argument. We need the notion of a "convex combination" of random variables. 
We say a random variable (such as our permutation) &#119905; is a convex combination of random variables &#119905; &#119894; , denoted by &#119905; &#8801; &#931; &#119894; &#120572; &#119894; &#119905; &#119894; (where &#120572; &#119894; &#8805; 0 and &#931; &#119894; &#120572; &#119894; = 1), if sampling &#119905; is equivalent to first sampling an index &#119894; with probability &#120572; &#119894; and then sampling &#119905; &#119894; .</p><p>Informally, the basic sampling argument is a statement about a uniform permutation &#119906; &#8764; F and how the distribution F changes if we are given some "advice" about this permutation, which is simply a function &#119892;(&#119906;). Roughly speaking, given that &#119892;(&#119906;) evaluates to &#119903; with probability at least 2 -&#119898; , the distribution F conditioned on &#119903; is a convex combination <ref type="foot">16</ref> of F (&#119901;,&#120575;) distributions where the number of paths fixed is at most &#119901; = 2&#119898;/&#120575;. Here &#120575; is a free parameter. We slightly abuse the notation and write this basic sampling argument as F|&#119903; &#8801; conv(F (&#119901;,&#120575;) ).</p><p>If we view &#119892;(&#119906;) as the output of the first quantum part of the circuit for BPP QNC &#119889; , and &#119906; as the oracle of interest (details are in the next section), it is suggestive that &#119906; |&#119892;(&#119906;) will be the oracle for the second quantum part of the circuit. We can use the sampling argument above and re-use our analysis because F and F (&#119901;,&#120575;) have very similar statistical properties. However, it is unclear how to use the sampling argument thereafter as the basic sampling argument seems to only apply to F (and not to F (&#119901;,&#120575;) ). It turns out that one can extend the sampling argument so that it also applies to the distributions F (&#119901;,&#120575;) , i.e. conditioning F (&#119901;,&#120575;) on advice again yields a convex combination of distributions of the same form, with correspondingly larger parameters. Consequently, if the procedure is successively applied &#241; &#8804; poly(&#119899;) times (starting with F), the convex combination would be over distributions of the form F ( &#241;&#119901;, &#241;&#120575;) . 
The parameters can be appropriately chosen to ensure that at most polynomially many paths are exposed but we omit the details in this overview.</p><p>Sampling argument for Injective Shufflers. The proofs of the previously mentioned statements do not rely on any special property of the distribution F nor do they depend on the fact that we were considering permutations. Any object for which we can describe a "reasonable" notion of "paths" admits such a sampling argument. Therefore, as we did for permutations, to describe the sampling argument, we change our viewpoint and consider "paths" in L = (&#119867; 0 , . . . &#119867; &#119889; ) instead of individual values taken by the &#119867; &#119894; 's. Recall that a "path" was a tuple of the form (&#119909; 0 , &#119909; 1 . . . ) such that</p><p>This viewpoint is inadequate for capturing the probabilistic behaviour of L due to two reasons (which are not hard to rectify). First, since &#119867; 0 : &#931; &#8594; &#931; &#119889; &#8242; , it is clear that at least &#931; &#119889; &#8242; -1 many points will never be contained in any "path" as described above. Therefore the behaviour of most points in &#119867; &#119894; (for &#119894; &#8712; {1 . . . &#119889; }) will not be captured by the "paths" viewpoint. Second, even though &#119867; &#119894; maps &#931; &#119889; &#8242; &#8594; &#931; &#119889; &#8242; for &#119894; &#8712; {1, . . . &#119889; -1}, &#119867; &#119894; may not be injective and therefore the paths might collide, which again would mean the behaviour of many points would not be captured by the "paths" viewpoint.</p><p>To rectify the second issue, we can select base sets (&#119878; 01 , . . . &#119878; 0&#119889; ) =: S0 and condition on the event &#119864;. Since in our proofs, we only care about the behaviour of L on S0 , it suffices to restrict our attention to S0 . Recall that L|&#119864; behaves as a permutation on S0 . Therefore no "path" inside S0 collides. 
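The paths viewpoint, and the shape of the distributions with a few paths fixed that the sampling argument produces, can be sketched for permutations (a toy sketch of our own; the delta-closeness bookkeeping is omitted):

```python
import random

N = 6  # toy domain {0, ..., N-1}

def paths(perm):
    """View a permutation as its set of 'paths' (x, perm[x])."""
    return {(x, y) for x, y in enumerate(perm)}

def sample_with_fixed_paths(fixed, rng):
    """Sample uniformly among permutations that contain a given set of
    non-colliding fixed paths: the shape of the F^(p, δ) distributions
    the sampling argument produces (the δ-closeness is omitted here)."""
    fixed = dict(fixed)
    used = set(fixed.values())
    free_x = [x for x in range(N) if x not in fixed]
    free_y = [y for y in range(N) if y not in used]
    rng.shuffle(free_y)
    perm = [None] * N
    for x, y in fixed.items():
        perm[x] = y
    for x, y in zip(free_x, free_y):
        perm[x] = y
    return perm
```

Conditioning on advice, in this picture, amounts to pinning down a small number of paths while the rest of the permutation stays close to uniform.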
To rectify the first issue, we consider two kinds of paths: Type 0 paths and Type 1 paths. <ref type="foot">17</ref> A Type 0 path is what we described earlier: a tuple of the form (&#119909; 0 , &#119909; 1 . . . ) such that &#119909; &#119894; = &#119867; &#119894;-1 (&#119909; &#119894;-1 ) for all &#119894;. A Type 1 path is a tuple of the form (&#119909; 1 , &#119909; 2 . . . ), again with &#119909; &#119894; = &#119867; &#119894;-1 (&#119909; &#119894;-1 ), whose first component sits at index 1.</p><p>Observe that, restricted to S0 and conditioned <ref type="foot">18</ref> on &#119864;, we have the following equivalence: given Pr[&#119867; &#119894; (&#119909;) = &#119909; &#8242; ] for all &#119894;, &#119909; and &#119909; &#8242; , one can compute the probability associated with both types of paths and, conversely, given probabilities associated with the paths, one can compute Pr[&#119867; &#119894; (&#119909;) = &#119909; &#8242; ] for all &#119894;, &#119909; and &#119909; &#8242; .</p><p>As is evident, working with L directly is cumbersome and we therefore define a simpler object, the injective shuffler. Fix sets (&#119878; 01 , . . . &#119878; 0&#119889; ). Let &#119867; &#8242; &#119894; : &#119878; 0&#119894; &#8594; &#119878; 0,&#119894;+1 for all &#119894; &#8712; {1, . . . &#119889; -1} be injective functions and let &#119867; &#8242; &#119889; : &#119878; 0&#119889; &#8594; {0, 1} &#119899; &#8746; {&#8869;} (which may not be injective) be such that &#119867; &#8242; &#119889; outputs &#8869; for all paths originating from &#931; (and no other). <ref type="foot">19</ref> We define the injective shuffler K as (&#119867; &#8242; 0 , . . . &#119867; &#8242; &#119889; ). Think of K as a simpler way to denote the relevant object associated with L|&#119864;. What do we mean by the relevant object? Not only is it injective, it also never reveals any information <ref type="foot">20</ref> about the values taken by H in &#931;. 
As alluded to at the beginning of this subsection, since the strings &#119904; &#119894; arise from quantum parts which only get access to L via shadow oracles, the sampling argument only needs to be applied to parts of L outside of paths in H .</p><p>To state the sampling argument for the injective shuffler, we define (&#119901;, &#120575;) non-&#120573;-uniform distributions F (&#119901;,&#120575;) |&#120573; inj for the injective shuffler (analogous to the way we defined them for permutations). We begin with the uniform distribution-it is simply a distribution which assigns equal probabilities to all the possible injective shufflers, given the sets (&#119878; 0&#119894; ) &#119894; . As for &#120573;-uniform distributions, F |&#120573; inj , we first need to define the "paths", &#120573;. Here, &#120573; will again be a set of "non-colliding paths" but formalising this requires some care (details in the full version <ref type="bibr">[10]</ref>). Then a &#120573;-uniform distribution is the same as the uniform distribution except that the paths in &#120573; are fixed. Omitting further details, one can define F (&#119901;,&#120575;) |&#120573; to be a distribution which is "&#120575; close to" the &#120573;-uniform distribution with at most &#119901; many paths fixed (in addition to &#120573;).</p><p>The sampling argument for injective shufflers is the following. Suppose we start with &#119905; &#8764; F &#120575; &#8242; |&#120573; inj (i.e. a distribution which is "&#120575; &#8242; close to" &#120573;-uniform) and are given some advice &#8462;(&#119905;) which happens to be &#119903; with probability at least 2 -&#119898; . 
Then the distribution F &#120575; &#8242; |&#120573; inj conditioned on &#119903; is, roughly speaking, a convex combination <note place="foot" n="21"><p>Again, neglecting a component with weight at most 2 -&#119898; .</p></note> of F (&#119901;,&#120575;+&#120575; &#8242; ) |&#120573; inj distributions where the number of paths fixed (in addition to &#120573;) is at most &#119901; = 2&#119898;/&#120575; and &#120575; again is a free parameter. Using the previous shorthand, we have (F &#120575; &#8242; |&#120573; inj )|&#119903; &#8801; conv(F (&#119901;,&#120575;+&#120575; &#8242; ) |&#120573; inj ).</p><p>Stitching everything together. As asserted before we described the sampling argument, one can replace all the oracles L in the quantum part of the circuit for BPP QNC &#119889; with appropriate shadow oracles. Let M 11 , . . . M 1&#119889; denote the shadow oracles for the first quantum part, M 21 . . . M 2&#119889; for the second quantum part and so on. Suppose the paths queried by the &#119894;th classical part were &#120573; &#119894; , and the string output by the &#119894;th quantum part was &#119904; &#119894; . Suppose M 11 . . . M 1&#119889; . . . M &#119894;-1,1 . . . M &#119894;-1,&#119889; have been specified. Now, conditioned on &#119904; &#119894; , the sampling argument says L|&#119904; &#119894; behaves as a convex combination of injective shufflers with certain paths exposed, when restricted to base sets. Let &#120573; (&#119904; &#119894; ) be the random variable which specifies these paths and occurs with the weights specified in the convex combination. One can define M &#119894;1 . . . M &#119894;&#119889; as in the QNC &#119889; case, ensuring the paths &#120573; 1 . . . &#120573; &#119894;-1 and &#120573; (&#119904; 1 ) . . . &#120573; (&#119904; &#119894;-1 ) have been exposed. Note crucially that &#119904; &#119894; is obtained by a quantum part which only had access to L via shadow oracles, so it does not change the distribution over H (except for polynomially many paths which were already exposed, &#120573; 1 . . . &#120573; &#119894;-1 and &#120573; (&#119904; 1 ) . . . &#120573; (&#119904; &#119894;-1 )). Using a hybrid argument as in the QNC &#119889; case, and using properties of the injective shuffler which is "&#120575; close" to being uniform, one can apply the O2H lemma and conclude that the hybrids (again, defined as in the QNC &#119889; case) are close in trace distance. Eventually, this yields that the initial circuit is close in trace distance to the circuit which only accesses L via the shadows M 11 . . . M 1&#119889; . . . M &#119898;1 . . . M &#119898;&#119889; in the quantum part (denote the number of quantum parts by &#119898; &#8804; poly(&#120582;)). The latter circuit again cannot solve &#119889;-CodeHashing, because H is only accessed by the classical parts of this circuit. More precisely, H is only queried at at most |&#120573; 1 &#8746; . . . &#120573; &#119898; &#8746; &#120573; (&#119904; 1 ) &#8746; . . . &#120573; (&#119904; &#119898; )| &#8804; poly(&#120582;) locations and therefore the whole circuit can be simulated while only making polynomially many classical queries to H . From the soundness of CodeHashing, this entails that &#119889;-CodeHashing cannot be solved. <note place="foot" n="19"><p>I.e. &#119867; &#8242; &#119889; (&#119909; &#119889; ) =&#8869; iff (&#119909; 0 , &#119909; 1 , . . . &#119909; &#119889; , &#119909; &#119889;+1 ) is a Type 0 path (therefore &#119909; &#119889;+1 =&#8869;).</p></note> <note place="foot" n="20"><p>Except for possibly polynomially many paths exposed by classical queries; we handle these shortly.</p></note></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.5">&#119889;-CodeHashing &#8713; BPP QNC BPP &#119889;</head><p>Just as the analysis of the BPP QNC &#119889; case built on the QNC &#119889; case, one can analyse the BPP QNC BPP &#119889; case by building on the QNC BPP &#119889; case. While the high level idea stays the same, the details are more involved. This is partly because, in the QNC &#119889; case, one could construct the shadow oracles M 1 . . . M &#119889; "all at once", since we were assuming the "worst case", i.e. the quantum algorithm learns everything there is to learn from the shadow oracles. However, in the QNC BPP &#119889; case, to define M &#119894; , one has to know the behaviour of the classical algorithms in the hybrid circuits which involved M 1 . . . M &#119894;-1 (in particular, one has to know the "paths" that have been exposed). We show how one can account for this, but we leave the details to the main body.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.6">Proof of Quantum Depth</head><p>In this subsection, we discuss how our complexity-theoretic separations also yield protocols for certifying quantum depth, i.e. proofs of quantum depth, in a way that is insensitive to classical polynomial depth. First, let us be a bit more precise about what we mean by a proof of quantum depth. Definition (informal). A proof of &#119889; quantum depth is a two-message protocol involving two parties, a verifier and a prover. Both parties are assumed to have access to the random oracle &#119867; . The verifier is a PPT machine. The protocol satisfies the following, where &#120582; is the security parameter and &#119889; is at most a fixed polynomial.</p><p>&#8226; Completeness: There is a prover in BQP which makes the verifier accept with probability 1 -negl(&#120582;). &#8226; Soundness: No prover in BPP QNC BPP &#119889; makes the verifier accept with probability more than negl(&#120582;). 
Since &#119889;-CodeHashing is in NP, it immediately yields a proof of &#119889; quantum depth.</p><p>We conclude this discussion by illustrating the subtlety of considering proofs of quantum depth with more than two messages. Consider the following protocol.</p><p>Example 12. The verifier, Alice, prepares BB84 states |&#119887; &#119894; &#10217; &#120579; &#119894; := &#119867; &#120579; &#119894; |&#119887; &#119894; &#10217; (&#119887; &#119894; , &#120579; &#119894; are both chosen uniformly at random) for &#119894; &#8712; {1, . . . &#119899;} where &#119867; is the Hadamard operation (not to be confused with the random oracle). She sends them all to the prover, Bob. Alice and Bob then engage in an &#119899; round protocol.</p><p>We obtain the above by instantiating our lifting procedure, &#119889;-Rec[&#8226;], with a variant of the proof of quantumness from <ref type="bibr">[17]</ref>, which we refer to as CollisionHashing. It is straightforward to show that CollisionHashing also satisfies classical query soundness by using the main argument in <ref type="bibr">[17]</ref> and the query lower bound for finding collisions proved in <ref type="bibr">[4]</ref>.</p><p>Let &#119892; be a 2 &#8594; 1 function for which it is hard to find a collision. Then, the (slightly simplified) problem is to produce a pair (&#119910;, &#119903; ) such that &#119903; &#8226; (&#119909; 0 &#8853; &#119909; 1 ) &#8853; &#119867; (&#119909; 0 ) &#8853; &#119867; (&#119909; 1 ) = 0 where {&#119909; 0 , &#119909; 1 } = &#119892; -1 (&#119910;). 
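To see why a shallow circuit can satisfy this equation: once the superposition is collapsed to the two pre-images of some &#119910;, a phase query to &#119867; followed by a Hadamard-basis measurement only ever outputs strings &#119903; satisfying it. The following toy amplitude computation (plain Python; the oracle H and the pre-image pair x0, x1 are hypothetical stand-ins, not the paper's oracles) checks this exhaustively on a small instance.

```python
# Toy check: measuring the post-collapse state in the Hadamard basis can
# only yield r with r.(x0 xor x1) xor H(x0) xor H(x1) = 0.
# H, x0, x1 below are hypothetical stand-ins.
n = 4
x0, x1 = 0b0110, 0b1011                          # pretend g(x0) = g(x1) = y
H = {x: (3 * x + 1) % 2 for x in range(2 ** n)}  # arbitrary 1-bit oracle

def dot(a, b):
    """Inner product of two n-bit strings, mod 2."""
    return sum(((a // 2 ** i) % 2) * ((b // 2 ** i) % 2) for i in range(n)) % 2

# amplitude of outcome r, up to normalisation, after the phase query and
# the final layer of Hadamards
sign = (-1) ** (H[x0] ^ H[x1])
support = [r for r in range(2 ** n)
           if (-1) ** dot(r, x0) + sign * (-1) ** dot(r, x1) != 0]

# every measurable outcome solves the CollisionHashing equation ...
assert all(dot(r, x0 ^ x1) ^ H[x0] ^ H[x1] == 0 for r in support)
# ... and exactly the 2^(n-1) strings satisfying the linear constraint occur
assert len(support) == 2 ** (n - 1)
```

The support is exactly the affine subspace of &#119903;'s solving the equation, which is why a single shallow measurement suffices.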
This problem can be solved in QNC O(1) (assuming that calls to &#119892; take only depth 1) by preparing the superposition &#8721; &#119909; |&#119909;&#10217; |&#119892;(&#119909;)&#10217;, measuring the second register in the standard basis, querying a phase oracle for &#119867; , and measuring the first register in the Hadamard basis (detailed later).</p><p>We said "slightly simplified" because in CollisionHashing, &#119892; is in fact a uniformly random function (treated as an oracle) with a domain twice as large as the co-domain. Note that this is not a 2 &#8594; 1 function in general. However, with overwhelming probability, a constant fraction of the elements in the co-domain have exactly two pre-images. Then, we require a pair (&#119910;, &#119903; ) such that either &#119910; has exactly two pre-images and (&#119910;, &#119903; ) satisfies the "equation", or &#119910; does not have exactly two pre-images. The limitation of CollisionHashing is that solutions to the problem are not verifiable, so the problem cannot be used to obtain a fine-grained proof of quantum depth.</p><p>Denote by &#119877; &#119867; the set of solutions to P (defined with respect to &#119867; ). Then, the key idea is simple. The problem &#119889;-Ser[P] is to return a tuple (&#119888; 0 , &#119888; 1 , . . . , &#119888; &#119889; ) such that &#119888; 0 is a solution to P, i.e. &#119888; 0 &#8712; &#119877; &#119867; , and similarly until &#119888; &#119889; , where each subsequent component is a solution to P defined relative to an oracle that depends on the preceding components.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Separations</head><p>To be a bit more concrete, take P to be CollisionHashing. We know CollisionHashing &#8712; QNC O(1) . Clearly, &#119889;-Ser[CollisionHashing] &#8712; BPP QNC O(1) . This is because BPP QNC O(1) allows one to run polynomially many QNC O(1) circuits. Consequently, one can use the first circuit to obtain the classical output &#119888; 0 , use the second circuit to find &#119888; 1 and so on. On the other hand, intuitively, we expect that &#119889;-Ser[CollisionHashing] &#8713; QNC BPP &#119889; . This is because to solve the (&#119894; +1)-th sub-problem, one seems to require the solution to all of the previous &#119894; sub-problems. Since there are &#119889; + 1 sub-problems in total, QNC BPP &#119889; does not seem to suffice (here of course we are implicitly using the fact that P &#8713; BPP). Formally, the argument proceeds in a similar way as for the lifting map &#119889;-Rec in Subsection 2.1.3, except for one subtlety which is handled by requiring that the problem P satisfies the extra property of offline soundness. We refer the reader to the main text for more details. We remark that offline soundness follows from classical query soundness and therefore both CollisionHashing and CodeHashing satisfy it.</p><p>The immediate consequence of the existence of the lifting map &#119889;-Ser[&#8226;] is the separation BPP QNC O(1) &#8840; QNC BPP &#119889; , relative to a random oracle.</p></div>
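The sequential structure that makes &#119889;-Ser[P] solvable with polynomially many shallow circuits, yet hard in few adaptive rounds, can be sketched with a toy stand-in for P (the hash-based oracle and the brute-force solver below are illustrative assumptions, not the paper's actual problem): each sub-problem's oracle is re-seeded with the previous answers, so the &#119889; + 1 sub-problems must be solved strictly one after another.

```python
import hashlib

def H(seed, x):
    # toy "random oracle" bit, hypothetical stand-in for the paper's H
    return hashlib.sha256(seed + x).digest()[0] % 2

def solve_P(seed):
    # toy sub-problem P relative to the oracle H(seed, .):
    # find x with H(seed, x) = 0 (each call plays the role of one
    # shallow quantum circuit)
    for x in range(256):
        if H(seed, bytes([x])) == 0:
            return bytes([x])
    raise RuntimeError("no solution in toy domain")

def d_ser_solve(d):
    # the d-Ser structure: the oracle for sub-problem i+1 depends on all
    # previous solutions, forcing d+1 strictly sequential calls
    seed, sols = b"", []
    for _ in range(d + 1):
        c = solve_P(seed)
        sols.append(c)
        seed = seed + c   # re-seed with everything solved so far
    return sols

def verify(sols):
    seed = b""
    for c in sols:
        if H(seed, c) != 0:
            return False
        seed = seed + c
    return True
```

A classical controller interleaving &#119889; + 1 shallow calls solves the chain, while a device limited to &#119889; adaptive rounds is one round short, mirroring the intended separation.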
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2">Establishing QNC BPP O(1) &#8840; BPP QNC &#119889;</head><p>This is the more surprising of the two hybrid separations, and its proof is more involved. In this section, we fix &#119889; &#8804; poly(&#120582;). The problem that yields this separation is the following variation on CollisionHashing: given access to a 2-to-1 function &#119892;, <ref type="foot">23</ref> and to &#119867; 0 , . . . &#119867; &#119889; (which specify &#8462; as &#8462; = &#119867; &#119889; &#8728; &#183; &#183; &#183; &#8728; &#119867; 0 ), find a pair (&#119910;, &#119903; ) such that &#119903; &#8226; (&#119909; 0 &#8853; &#119909; 1 ) &#8853; &#119867; (&#8462;(&#119910;)||&#119909; 0 ) &#8853; &#119867; (&#8462;(&#119910;)||&#119909; 1 ) = 0 , where {&#119909; 0 , &#119909; 1 } = &#119892; -1 (&#119910;). We refer to the new problem as &#119889;-hCollisionHashing.</p><p>Without relying on &#8462; (that is, requiring that the equation to be satisfied is just &#119903; &#8226; (&#119909; 0 &#8853; &#119909; 1 ) &#8853; &#119867; (&#119909; 0 ) &#8853; &#119867; (&#119909; 1 ) = 0), this problem is the same as CollisionHashing. 
This can be solved in QNC O(1) as follows:</p><p>(i) Evaluate &#119892; on a uniform superposition of inputs, obtaining &#8721; &#119909; |&#119909;&#10217; |&#119892;(&#119909;)&#10217;, (ii) Measure the image register, obtaining some outcome &#119910; and a state (|&#119909; 0 &#10217; + |&#119909; 1 &#10217;) |&#119910;&#10217;, (iii) Query a phase oracle for &#119867; to obtain ((-1) &#119867; (&#119909; 0 ) |&#119909; 0 &#10217; + (-1) &#119867; (&#119909; 1 ) |&#119909; 1 &#10217;) |&#119910;&#10217;, (iv) Make a Hadamard basis measurement of the first register, obtaining outcome &#119903; .</p><p>At a high level, in order to solve the new problem, which includes the evaluation of &#8462; as an input to &#119867; , one needs the ability to perform a (classical) depth &#119889; computation to evaluate &#8462;(&#119910;) (since this requires the sequential evaluations of &#119867; 0 , . . . , &#119867; &#119889; ). Note that a QNC BPP algorithm can solve this problem: the only modification to the algorithm described above is that, at step (iii), the algorithm first computes &#8462;(&#119910;) (using polynomial classical computation), and then queries the oracle &#119867; on a superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ). One can easily verify that this leads to a valid (&#119910;, &#119903; ) for the problem.</p><p>Next, we sketch how one can argue that the problem cannot be solved in BPP QNC . The key technical ingredient is a "structure theorem" that characterizes the structure of efficient quantum strategies that are successful at CollisionHashing. Our structure theorem applies equally to the proof of quantumness protocol from <ref type="bibr">[17]</ref> (recall that the latter is just a version of collision hashing where &#119892; is replaced by a 2-to-1 trapdoor claw-free function).</p><p>Theorem 14 (informal). 
Let &#119875; be any BQP prover that succeeds with 1 -negl(&#119899;) probability at the proof of quantumness protocol from <ref type="bibr">[17]</ref>, by making &#119902; queries to the oracle &#119867; . Then, with 1 -negl(&#119899;) probability over pairs (&#119867;, &#119910;), the following holds. Let &#119901; &#119910; |&#119867; be the probability that &#119875; &#119867; outputs &#119910;, and let &#119909; 0 ,&#119909; 1 be the pre-images of &#119910;. Then, for all &#119887; &#8712; {0, 1}, there exists &#119894; &#8712; [&#119902;] such that the state of the query register of &#119875; &#119867; right before the &#119894;-th query has weight (1/2) &#119901; &#119910; |&#119867; &#8226; (1 -negl(&#119899;)) on &#119909; &#119887; . <note place="foot" n="23"><p>Since we want our problem to be relative to a uniformly random oracle, in the formal description of the problem in the main text, we will not assume that &#119892; is exactly 2-to-1. Rather we will take &#119892; to be a uniformly random function with domain twice as large as the co-domain, and simply restrict our attention to &#119910;'s in the co-domain that have exactly two pre-images (this is a constant fraction of the elements of the co-domain with overwhelming probability).</p></note></p><p>See the full version <ref type="bibr">[10]</ref> for a formal statement of this result. This is a crucial strengthening of a theorem from <ref type="bibr">[26]</ref>, and employs the compressed oracle technique <ref type="bibr">[44]</ref>. A slight adaptation of this to our problem asserts that a successful strategy must be querying the random oracle &#119867; at a (close to) uniform superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ). Now let &#119860; be a BPP QNC algorithm that succeeds at &#119889;-hCollisionHashing with high probability and let &#119902; be the total number of queries to &#8462; made by the algorithm. 
Then, one can show that, since the QNC part of the algorithm does not have sufficient depth to evaluate &#8462; (which is a sequential evaluation of &#119867; 0 , . . . , &#119867; &#119889; ), we can assume, without loss of generality, that the QNC part of &#119860; has no access to &#8462;. In other words, all of the queries to &#8462; are classical. Now, Theorem 14 says essentially that, for any &#119910;, the only way to succeed with high probability (conditioned on &#119910; being the output) is to query (with as much weight as the probability of outputting &#119910;) a uniform superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ). However, observe that, for any &#119910;, the only way for &#119860; to query &#119867; (with high weight) at a uniform superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ) is to correctly guess the value of &#8462;(&#119910;). Since this value is uniformly random for any algorithm that has not queried &#8462; at &#119910;, it follows that querying &#119867; at the uniform superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ) must happen after the algorithm has already queried &#8462; on &#119910;.</p><p>This implies that there must exist an &#119894; * &#8712; [&#119902;] such that, with high probability, &#119860; outputs (&#119910;, &#119903; ) such that &#119910; is contained in the list of classical queries made to &#8462; up to the &#119894; * -th query. Denote such a list by &#119871; &#119894; * . Moreover, with high probability over &#119871; &#119894; * , the continuation of &#119860; (from that point on) queries &#119867; at a uniform superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ) for some &#119910; &#8712; &#119871; &#119894; * . 
We show that such an algorithm &#119860; can be leveraged to extract a collision for &#119892;.</p><p>The key observation is that, since &#119860; is a BPP QNC algorithm, and all of the queries to &#8462; happen in the BPP portion of &#119860;, the "state" of algorithm &#119860; right after the &#119894; * -th query to &#8462; is entirely classical. Thus, one can take a "snapshot" of the state of &#119860; at that point (i.e. copy it), and simply run two independent executions of &#119860; from that point on (with independent classical randomness). By what we argued earlier, with high probability, there exists &#119910; &#8712; &#119871; &#119894; * , such that the execution of &#119860; from that point on, queries &#119867; at a uniform superposition of (&#8462;(&#119910;), &#119909; 0 ) and (&#8462;(&#119910;), &#119909; 1 ). Since the two executions are identical and independent, it follows that measuring the query registers of &#119867; in both executions will yield distinct pre-images of &#119910; with significant probability.</p><p>Finding collisions of &#119892; is of course hard (for any query-bounded quantum algorithm) <ref type="bibr">[4]</ref>. Hence, this yields a contradiction. For details, we refer the reader to the full version <ref type="bibr">[10]</ref>. </p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>Technically<ref type="bibr">[35]</ref> only shows the strict containment NC &#8842; P, relative to a random oracle. However, the quantum version QNC &#8842; BQP can also be shown as a straightforward extension of that result. That containment also follows from<ref type="bibr">[24]</ref>.</p></note>
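The snapshot-and-rerun extraction described in Section 2.2.2 can be sketched as a toy simulation. Everything below is a hypothetical stand-in: g is a simple 2-to-1 function, and the continuation of A after the i*-th classical query is modelled as returning a uniformly random pre-image of a fixed y, which is what measuring the H-query register would give.

```python
import random

random.seed(0)  # deterministic toy run

n = 8
g = {x: x % (2 ** (n - 1)) for x in range(2 ** n)}  # a simple 2-to-1 function

def continuation(y):
    # Stand-in for: resume A from the classical snapshot and measure the
    # query register of H, obtaining one of the two pre-images of y.
    return random.choice([x for x in g if g[x] == y])

def extract_collision(y, attempts=64):
    # Each attempt reruns A twice, independently, from the same snapshot;
    # the two measured pre-images differ with probability 1/2 per attempt.
    for _ in range(attempts):
        a = continuation(y)
        b = continuation(y)
        if a != b and g[a] == g[b]:
            return (a, b)
    return None

pair = extract_collision(5)
```

Because the snapshot is purely classical, the two reruns are i.i.d.; this copying step is exactly what is unavailable for a coherent quantum state, which is why the argument needs all queries to &#8462; to be classical.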
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1"><p>Note that here and throughout the paper, the QNC oracle can output a string, unlike a decision oracle which outputs a bit.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2"><p>Note that the BPP oracle is not invoked coherently. Instead, it is invoked on outcomes resulting from intermediate measurements performed in the layers of the QNC d circuit.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3"><p>In fact in QNC 0 , since queries to the oracle are assumed to have depth 1.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4"><p>That is, 2 messages in total, or equivalently a 1-round protocol.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5"><p>Here, as well as in all subsequent results, the statements hold with probability 1 over the choice of the random oracle. In addition, queries to the oracle are viewed as having depth 1 (discussed later).</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6"><p>We remark that, if one is only concerned with the complexity-theoretic separation of Theorem 1, and not with efficient verification, then a much simpler problem (CollisionHashing described later) suffices.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_7"><p>Which we refer to as CollisionHashing later.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_8"><p>&#119889;-Rec[ &#8226;] is meant to be short for &#119889;-Recursive.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_9"><p>We sometimes refer to this fact by saying that the function is "expanding".</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="15" xml:id="foot_10"><p>Two caveats: (1) &#119867; 0 : &#931; &#8594; &#931; &#119889; &#8242; therefore some of the paths will not have well defined first components and (2) we only care about queries made inside the base sets where conditioned on &#119864;, &#119867; 1 . . . &#119867; &#119889;-1 behave as permutations.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="16" xml:id="foot_11"><p>In the convex combination, there is a small component, of weight at most 2 -&#119898; , of some arbitrary distribution.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="17" xml:id="foot_12"><p>The 0 and 1 represent where the first non-&#8869; component sits.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="18" xml:id="foot_13"><p>Recall, &#119864; is the event that the oracles &#119867; 0 and &#119867; 1 . . . &#119867; &#119889; are injective on &#931; and S0 resp.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="22" xml:id="foot_14"><p>While we used quantum communication in the protocol, one could (using known results) delegate the production of these states to the prover (under computational assumptions) and run a similar protocol using classical communication.</p></note>
		</body>
		</text>
</TEI>
