<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Parallel Repetition for the GHZ Game: A Simpler Proof</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>2021</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10310086</idno>
					<idno type="doi">10.4230/LIPIcs.APPROX/RANDOM.2021.62</idno>
					<title level='j'>Leibniz international proceedings in informatics</title>
<idno type="issn">1868-8969</idno>

					<author>Uma Girish</author><author>Justin Holmgren</author><author>Kunal Mittal</author><author>Ran Raz</author><author>Wei Zhan</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[We give a new proof of the fact that the parallel repetition of the (3-player) GHZ game reduces the value of the game to zero polynomially quickly. That is, we show that the value of the n-fold GHZ game is at most n^{-Ω(1)}. This was first established by Holmgren and Raz [18]. We present a new proof of this theorem that we believe to be simpler and more direct. Unlike most previous works on parallel repetition, our proof makes no use of information theory, and instead relies on the use of Fourier analysis. The GHZ game [15] has played a foundational role in the understanding of quantum information theory, due in part to the fact that quantum strategies can win the GHZ game with probability 1. It is possible that improved parallel repetition bounds may find applications in this setting. Recently, Dinur, Harsha, Venkat, and Yuen [7] highlighted the GHZ game as a simple three-player game, which is in some sense maximally far from the class of multi-player games whose behavior under parallel repetition is well understood. Dinur et al. conjectured that parallel repetition decreases the value of the GHZ game exponentially quickly, and speculated that progress on proving this would shed light on parallel repetition for general multi-player (multi-prover) games.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The focus of this paper is multi-player games, and in particular their asymptotic behavior under parallel repetition. A multi-player game consists of a one-round interaction between a referee and k players. In this interaction, the referee first samples a "query" tuple (q_1, …, q_k) from some joint query distribution Q, and for each i sends q_i to the i-th player. The players are required to respectively produce "answers" a_1, …, a_k without communicating with one another (that is, each a_i is a function only of q_i), and they are said to win the game if (q_1, …, q_k, a_1, …, a_k) satisfies some predicate W that is fixed and associated with the game.</p><p>Suppose that a game G has the property that the maximum probability with which the players can win is 1 − ϵ, no matter what strategy they use. This quantity is called the value of G. The parallel repetition question <ref type="bibr">[13]</ref> asks: how well can the players concurrently play n independent copies of G? More precisely, consider the following k-player game, which we call the n-wise parallel repetition of G and denote by G^n: 1. The referee samples, for each i ∈ [n] independently, a query tuple (q^i_1, …, q^i_k) ∼ Q. We refer to the index i as a coordinate of the parallel repeated game. 2. The j-th player is given (q^1_j, …, q^n_j) and is required to produce a tuple (a^1_j, …, a^n_j). 3. The players are said to win in coordinate i if (q^i_1, …, q^i_k, a^i_1, …, a^i_k) satisfies W. They are said to win (without qualification) if they win in every coordinate i ∈ [n].</p><p>One might initially conjecture that the value of G^n is (1 − ϵ)^n. 
However, this turns out not to be true <ref type="bibr">[14,</ref><ref type="bibr">9,</ref><ref type="bibr">12,</ref><ref type="bibr">25]</ref>, as players may benefit from correlating their answers across different coordinates. Still, Raz showed that if G is a two-player game, then the value of G^n is 2^{−Ω(n)}, where the Ω hides a game-dependent constant <ref type="bibr">[23,</ref><ref type="bibr">17]</ref>. Tighter results, based on the value of the initial game, are also known <ref type="bibr">[8,</ref><ref type="bibr">5]</ref>. For many applications, such bounds are qualitatively as good as the initial flawed conjecture.</p><p>Games involving three or more players have proven more difficult to analyze, and the best known general bound on their parallel repeated value is due to <ref type="bibr">Verbitsky [26]</ref>. This bound states that the value of G^n approaches 0, but the bound is extremely weak: it shows only that the value is at most 1/α(n), where α denotes an inverse Ackermann function. The weakness of this bound is generally conjectured to reflect limitations of current proof techniques rather than a fundamental difference in the behavior of many-player games. However, in the technically incomparable but related no-signaling setting, Holmgren and Yang showed that three-player games genuinely behave differently than two-player games <ref type="bibr">[19]</ref>. Specifically, they showed that there exists a three-player game with "no-signaling value" bounded away from 1 such that no amount of parallel repetition reduces the no-signaling value at all.</p><p>Parallel repetition is a mathematically natural operation that we find worthy of study in its own right. At the same time, parallel repetition bounds have found several applications in theoretical computer science (see the survey <ref type="bibr">[24]</ref>). 
For example, parallel repetition of two-player games shares intimate connections with multi-prover interactive proofs <ref type="bibr">[4]</ref>, probabilistically checkable proofs and hardness of approximation <ref type="bibr">[3,</ref><ref type="bibr">10,</ref><ref type="bibr">16]</ref>, the geometry of foams <ref type="bibr">[11,</ref><ref type="bibr">20,</ref><ref type="bibr">1]</ref>, quantum information <ref type="bibr">[6]</ref>, and communication complexity <ref type="bibr">[22,</ref><ref type="bibr">2]</ref>. Recent work also shows that strong parallel repetition for a particular class of multi-prover games implies new time lower bounds on Turing machines that can take advice <ref type="bibr">[21]</ref>.</p><p>Dinur et al. <ref type="bibr">[7]</ref> describe a restricted class of multi-player games for which Raz's approach generalizes (giving exponential parallel repetition bounds). Specifically, they consider games whose query distribution satisfies a certain connectivity property. For games outside this class, Verbitsky's bound was the best known. Dinur et al. highlighted one simple three-player game, called the GHZ game <ref type="bibr">[15]</ref>, that in some sense is maximally far from the aforementioned tractable class of multi-player games. In the GHZ game, the players' queries (q_1, q_2, q_3) are chosen uniformly at random from {0, 1}^3 subject to q_1 ⊕ q_2 ⊕ q_3 = 0, and the players' goal is to produce (a_1, a_2, a_3) such that a_1 ⊕ a_2 ⊕ a_3 = q_1 ∨ q_2 ∨ q_3. Dinur et al. conjectured that parallel repetition decreases the value of the GHZ game exponentially quickly, and speculated that progress on proving this would shed light on parallel repetition for general games. The GHZ game has also played a foundational role in the understanding of quantum information theory, due in part to the fact that quantum strategies can win the GHZ game with probability 1. 
It is possible that improved parallel repetition bounds will find applications in this setting as well.</p><p>In a recent work, Holmgren and Raz <ref type="bibr">[18]</ref> proved the following polynomial upper bound on the parallel repetition of the GHZ game:</p><p>▶ Theorem 1. The value of the n-wise repeated GHZ game is at most n^{−Ω(1)}.</p><p>Our main contribution is a different proof of this theorem that, in our view, is significantly simpler and more direct than the proof of <ref type="bibr">[18]</ref>. Like <ref type="bibr">[18]</ref>, we do not rely on any properties of the GHZ game other than its query distribution; in particular, we do not rely on specifics of the win condition. Furthermore, unlike most previous works on parallel repetition, our proof makes no use of information theory, and instead relies on Fourier analysis.</p></div>
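The value 3/4 of the unrepeated GHZ game, used throughout the paper, can be checked by exhaustive search over all 64 deterministic strategies (each player's strategy is a function {0,1} → {0,1}). This is a small illustrative sketch of ours, not part of the paper's proof:

```python
from itertools import product

# GHZ query distribution: uniform over triples with q1 ^ q2 ^ q3 == 0.
QUERIES = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def ghz_value():
    """Value of the (unrepeated) GHZ game, maximized over all deterministic
    strategies. A single player's strategy is a function {0,1} -> {0,1},
    encoded here as the pair (f(0), f(1)); 4 per player, 64 in total."""
    best = 0.0
    for f1, f2, f3 in product(product((0, 1), repeat=2), repeat=3):
        wins = sum(
            (f1[q1] ^ f2[q2] ^ f3[q3]) == (q1 | q2 | q3)
            for (q1, q2, q3) in QUERIES
        )
        best = max(best, wins / len(QUERIES))
    return best

print(ghz_value())  # -> 0.75
```

The search confirms that no classical deterministic (hence, by averaging, no probabilistic) strategy wins on all four queries.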
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Technical Overview</head><p>Let P denote the distribution of queries in the n-wise parallel repeated GHZ game. Let α = Θ(1/n^ε) for a small constant ε > 0, and let E = E_1 × E_2 × E_3 be any product event with significant probability under P, i.e., P(E) ≥ α. The core of our proof is establishing that for a random coordinate i ∈ [n], the query distribution P|E (P conditioned on E) is mildly hard in the i-th coordinate. That is, given queries sampled from P|E, the players' maximum winning probability in the i-th coordinate is bounded away from 1. Using standard arguments from the parallel repetition literature, this implies an inverse polynomial bound on the value of the n-fold GHZ game. The difficulty, as usual, is that the n different queries in P|E may not be independent.</p><p>Our approach at a high level is to: 1. Identify a class D of simple distributions (over queries for the n-wise repeated GHZ game) such that it is easy to analyze (in step 3 below) which coordinates are hard for any given D ∈ D. By hard, we mean that the players' maximum winning probability in the i-th coordinate is 3/4. 2. Approximate P|E by a convex combination of distributions from D. That is, we write P|E ≈ Σ_j p_j · D_j, where {D_j} are distributions in D, the p_j are non-negative reals summing to 1, and ≈ denotes closeness in total variational distance. 3. Show that in the above convex combination, "most" of the D_j have many hard coordinates. More precisely, if we sample j with probability p_j, then the expected fraction of coordinates in which D_j is hard is at least a constant (say 1/3). Completing this approach implies that if i ∈ [n] is uniformly random, then the i-th coordinate of P|E can be won with probability at most 1 − Ω(1). We elaborate on each of these steps below.</p></div>
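To make the objects in this overview concrete, here is a minimal Python sketch (our own illustration, not from the paper) of the query distribution P = Q^n and of sampling from a conditioned distribution P|E by rejection; the particular product event used below is a hypothetical toy example.

```python
import random

def sample_Q_n(n, rng):
    """One query tuple (x, y, z) from P = Q^n: independently in each coordinate,
    (x_i, y_i) is uniform and z_i = x_i ^ y_i, so that x_i ^ y_i ^ z_i = 0."""
    x = [rng.randint(0, 1) for _ in range(n)]
    y = [rng.randint(0, 1) for _ in range(n)]
    z = [xi ^ yi for xi, yi in zip(x, y)]
    return x, y, z

def sample_conditioned(n, E1, E2, E3, rng):
    """Sample from P|E for a product event E = E1 x E2 x E3 by rejection.
    E1, E2, E3 are predicates on {0,1}^n; practical only when P(E) is not tiny."""
    while True:
        x, y, z = sample_Q_n(n, rng)
        if E1(x) and E2(y) and E3(z):
            return x, y, z

rng = random.Random(0)
first_bit_zero = lambda v: v[0] == 0  # toy product event; here P(E) = 1/4
x, y, z = sample_conditioned(8, first_bit_zero, first_bit_zero, first_bit_zero, rng)
```

Conditioning correlates the coordinates of P|E in general, which is exactly the difficulty the overview describes.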
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Bow Tie Distributions</head><p>For our class of "simple" distributions D, we introduce the notion of a "bow tie" distribution. We then define D to be the set of all bow tie distributions. A bow tie is a set B of the form B = {(x_0, y_0, z_0), (x_0, y_1, z_1), (x_1, y_0, z_1), (x_1, y_1, z_0)} ⊆ (F_2^n)^3 such that for each (x, y, z) in B, we have x + y + z = 0. In particular this requires that x_0 + x_1 = y_0 + y_1 = z_0 + z_1. A bow tie distribution is the uniform distribution on a bow tie. Our name of "bow tie" is based on the fact that bow ties are thus determined by {(x_0, y_0), (x_0, y_1), (x_1, y_0), (x_1, y_1)}, which we sometimes view as a set of edges in a graph. In this case, bow ties are special kinds of K_{2,2} subgraphs, where K_{2,2} denotes the complete bipartite graph.</p><p>The main property of a bow tie distribution D is that for every coordinate i for which (x_0)_i ≠ (x_1)_i (equivalently, (y_0)_i ≠ (y_1)_i, or (z_0)_i ≠ (z_1)_i), the i-th coordinate of D is as hard as the GHZ game (i.e., players cannot produce winning answers for the i-th coordinate with probability more than 3/4). This follows by "locally embedding" the (unrepeated) GHZ query distribution into the i-th coordinate of D as follows. We first swap x_0 ↔ x_1, y_0 ↔ y_1, z_0 ↔ z_1 as necessary to ensure that ((x_0)_i, (y_0)_i, (z_0)_i) = (0, 0, 0). (1) An even number of swaps is required to do this by the assumption that x_0 + y_0 + z_0 = 0, and bow ties are invariant under an even number of such swaps. Thus Equation (1) is without loss of generality. Suppose f̃_1, f̃_2, f̃_3 : F_2^n → F_2 comprise a strategy for the i-th coordinate of D. Then a strategy f_1, f_2, f_3 : F_2 → F_2 for the basic (unrepeated) GHZ game can be constructed as f_1(b) = f̃_1(x_b), f_2(b) = f̃_2(y_b), f_3(b) = f̃_3(z_b) for b ∈ {0, 1}. The winning probability of this strategy is the same as the winning probability of f̃_1, f̃_2, f̃_3 in the i-th coordinate, because ((x_{b_1})_i, (y_{b_2})_i, (z_{b_3})_i) = (b_1, b_2, b_3) for every element (x_{b_1}, y_{b_2}, z_{b_3}) of B. Hence both probabilities are at most 3/4.</p></div>
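The local-embedding property can be verified numerically. The sketch below (our own illustration; the explicit four-element form of the bow tie is as reconstructed above) builds a bow tie from x_0, y_0 and the common difference d = x_0 ⊕ x_1, and checks that in every coordinate i with d_i = 1, the four elements restrict to exactly the four GHZ queries:

```python
def bow_tie(x0, y0, d):
    """The bow tie through (x0, y0, z0) with common difference d, where
    z0 = x0 ^ y0 (so x0 + y0 + z0 = 0), x1 = x0 ^ d, y1 = y0 ^ d, z1 = z0 ^ d."""
    xor = lambda u, v: [a ^ b for a, b in zip(u, v)]
    z0 = xor(x0, y0)
    x1, y1, z1 = xor(x0, d), xor(y0, d), xor(z0, d)
    # The four triples (x_{b1}, y_{b2}, z_{b1^b2}); each sums to 0 coordinatewise.
    return [(x0, y0, z0), (x0, y1, z1), (x1, y0, z1), (x1, y1, z0)]

GHZ_QUERIES = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}

x0, y0, d = [0, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 0]
B = bow_tie(x0, y0, d)
for i, di in enumerate(d):
    if di == 1:  # a coordinate where (x0)_i != (x1)_i
        restriction = {(x[i], y[i], z[i]) for (x, y, z) in B}
        assert restriction == GHZ_QUERIES  # coordinate i looks exactly like GHZ
```

Since the restriction is a bijection onto the four GHZ queries, the uniform distribution on B restricts, in any such coordinate, to the GHZ query distribution.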
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Approximating P|E by Bow Ties</head><p>We now sketch how to approximate P|E by a convex combination of bow tie distributions, where E is a product event E_1 × E_2 × E_3. We assume for now that the non-zero Fourier coefficients of each E_j are small. We will return to this assumption at the end of the overview; it turns out to be nearly without loss of generality.</p><p>We show that P|E is close in total variational distance to the distribution obtained by sampling a uniformly random bow tie B ⊆ E, and then outputting a random element of B. The latter distribution is equivalent to sampling (x, y, z) with probability proportional to the number of bow ties B ⊆ E that contain (x, y, z). This number is (Σ_{z′} E_1(x + z + z′) · E_2(y + z + z′) · E_3(z′)) − 1, (2) where we identify E_1, E_2, and E_3 with their indicator functions. Note that we are subtracting 1 to cancel the term corresponding to z′ = z.</p><p>Intuitively, the fact that all E_j have small Fourier coefficients means that they look random with respect to linear functions. Thus, one might guess that the above sum is close to 2^n · µ(E_1)µ(E_2)µ(E_3) for most (x, y, z) ∈ supp(P|E), where µ(S) = |S|/2^n denotes the measure of S under the uniform distribution on F_2^n. If "close to" and "most" have the right meanings, then this implies that our distribution is close in total variational distance to P|E, as desired.</p><p>Our full proof indeed establishes this. More precisely, we view Equation (2) as a vector indexed by (x, y, z) and establish bounds on that vector's ℓ_1 and ℓ_2 norms as a criterion for near-uniformity. In the process our proof repeatedly uses two estimates that hold for all sufficiently large sets S, T ⊆ F_2^n (see <ref type="bibr">Lemma 16</ref>).</p></div>
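The counting identity in Equation (2), as reconstructed above, can be sanity-checked by brute force for small n: counting bow ties inside E through a point (x, y, z) ∈ E directly, by enumerating the common difference d = x_0 ⊕ x_1, matches the indicator sum minus 1. This is our own illustrative sketch:

```python
import random

def direct_count(E1, E2, E3, x, y, z, n):
    """Count bow ties inside E1 x E2 x E3 containing (x, y, z), assuming
    (x, y, z) is itself in E, by enumerating the common difference
    d = x0 ^ x1 (= y0 ^ y1 = z0 ^ z1) over F_2^n \\ {0}."""
    xor = lambda u, v: tuple(a ^ b for a, b in zip(u, v))
    count = 0
    for m in range(1, 2 ** n):  # skip d = 0 (degenerate)
        d = tuple((m >> i) & 1 for i in range(n))
        if xor(x, d) in E1 and xor(y, d) in E2 and xor(z, d) in E3:
            count += 1
    return count

def indicator_sum(E1, E2, E3, x, y, z, n):
    """Equation (2) as reconstructed: the sum over z' of
    E1(x + z + z') E2(y + z + z') E3(z'), minus 1 for the term z' = z."""
    xor = lambda u, v: tuple(a ^ b for a, b in zip(u, v))
    total = 0
    for m in range(2 ** n):
        zp = tuple((m >> i) & 1 for i in range(n))
        total += int(xor(xor(x, z), zp) in E1) \
               * int(xor(xor(y, z), zp) in E2) * int(zp in E3)
    return total - 1

# Sanity check on random small events; (x, y, z) must lie in E1 x E2 x E3.
rng = random.Random(0)
n = 6
universe = [tuple((m >> i) & 1 for i in range(n)) for m in range(2 ** n)]
E1, E2, E3 = (set(rng.sample(universe, 28)) for _ in range(3))
x, y = next(iter(E1)), next(iter(E2))
z = tuple(a ^ b for a, b in zip(x, y))
E3.add(z)  # ensure (x, y, z) is in E
assert direct_count(E1, E2, E3, x, y, z, n) == indicator_sum(E1, E2, E3, x, y, z, n)
```

The substitution z′ = z ⊕ d makes the two counts term-by-term identical, which is what the assertion confirms.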
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Most Bow Ties are Hard in Many Coordinates</head><p>For the final step of our proof, we need to show that the distribution of bow ties analyzed in the previous step produces (with high probability) bow ties that differ in many coordinates. We begin by parameterizing a bow tie by (x_0, y_0, x_0 ⊕ x_1) and noting that in the previous step, we essentially showed that E contains 2^{3n − O(log n)} different bow ties. The O(log n) term in the exponent arises from the fact that the events {E_j} have density in F_2^n that is inverse polynomial in n. A simple counting argument then shows that for a random bow tie, the min-entropy of x_0 ⊕ x_1 is close to n. This means that x_0 ⊕ x_1 is close to the uniform distribution in the sense that any event occurring with probability p under the uniform distribution occurs with probability at most p · n^{O(1)} under the distribution of x_0 ⊕ x_1. Thus we can finally apply a Chernoff bound to deduce that with all but 2^{−Ω(n)} probability, x_0 ⊕ x_1 has Hamming weight at least n/3.</p><p>In other words, a bow tie sampled uniformly at random differs in at least a 1/3 fraction of coordinates. By the main property of bow ties, this implies that the corresponding bow tie distribution is hard on a 1/3 fraction of coordinates (indeed, the same set of coordinates).</p></div>
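The Chernoff step is easy to visualize: for x uniform on F_2^n, the probability that hwt(x) &lt; n/3 decays exponentially in n, so it still vanishes after absorbing the n^{O(1)} factor from the min-entropy argument. Below is an exact computation of this binomial tail (our own illustration of the bound being applied, not the paper's argument):

```python
from math import comb

def tail_below(n, k):
    """P(hwt(x) < k) for x uniform in F_2^n: sum of C(n, j) / 2^n over j < k."""
    return sum(comb(n, j) for j in range(k)) / 2 ** n

# The tail P(hwt < n/3) decays exponentially as n grows.
for n in (30, 60, 120):
    print(n, tail_below(n, n // 3))
```

Doubling n roughly squares the tail, consistent with the 2^{−Ω(n)} bound.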
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Handling General Events</head><p>For general (product) events E = E_1 × E_2 × E_3 (where the sets {E_i} need not have small Fourier coefficients), we can partition the universe (F_2^n)^3 into parts π such that for most of the parts π, the event E restricted to π has the structure that we already analyzed. For this to make sense, we ensure several properties of the partition. First, π should be a product set (π = π_1 × π_2 × π_3) so that E ∩ π is a product set as well, i.e., E ∩ π has the form Ẽ_1 × Ẽ_2 × Ẽ_3. Second, each π_i should be an affine subspace of F_2^n so that we can do Fourier analysis with respect to this subspace. Finally, π_1, π_2, and π_3 should all be affine shifts of the same linear subspace, so that the set {(x, y, z) ∈ π : x + y + z = 0} has the same Fourier-analytic structure as the parallel repeated GHZ query set {(x, y, z) ∈ (F_2^{n′})^3 : x + y + z = 0} for some n′ &lt; n.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We prove the existence of such a partition with n′ not too small (n′ = n − o(n)) by a simple iterative approach, which is similar to <ref type="bibr">[18]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Comparison to [18]</head><p>Our proof has some similarity to <ref type="bibr">[18]</ref>: in particular, both proofs partition (F_2^n)^3 into subspaces according to Fourier-analytic criteria and analyze these subspaces separately. But the resemblance ends there. In fact, there are fundamental high-level differences between the two proofs.</p><p>The biggest qualitative difference is that our high-level approach decomposes any conditional distribution P|E into components (bow tie distributions) for which many coordinates are hard. <ref type="bibr">[18]</ref> takes an analogous approach, but establishes a weaker result that differs in the order of quantifiers: it first fixes a strategy f, and then decomposes P|E into components such that f performs poorly on many coordinates of many components. This difference is due to the fact that <ref type="bibr">[18]</ref> uses uniform distributions on high-dimensional affine subspaces as its basic "hard" distributions. It is not in general possible to express P|E as a convex combination of such distributions (for example, if each E_j is a uniformly random subset of F_2^n). Instead, <ref type="bibr">[18]</ref> expresses P|E as a convex combination of "pseudo-affine" distributions. This significantly complicates their proof, and we avoid this complication entirely by our use of bow tie distributions, which are novel to this work.</p><p>The remainder of our proof (the analysis of hardness within each part of the partition) is entirely different.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Notation &amp; Preliminaries</head><p>A significant portion of these preliminaries is taken verbatim from <ref type="bibr">[18]</ref>. We write exp(t) to denote e^t for t ∈ R.</p><p>Let n ∈ N. For a vector v ∈ R^n and i ∈ [n], we write v(i) or v_i to denote the i-th coordinate of v. For p ∈ N, we write ‖v‖_p = (Σ_{i ∈ [n]} |v_i|^p)^{1/p} to denote the ℓ_p norm of v.</p><p>For z ∈ {0, 1}*, hwt(z) := ‖z‖_1 denotes the Hamming weight of z. We crucially rely on the Cauchy-Schwarz inequality.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Set Theory</head><p>Let Ω be a universe. By a partition of Ω, we mean a collection of pairwise disjoint subsets of Ω whose union equals Ω. If Π is a partition of Ω and ω is an element of Ω, we write Π(ω) to denote the (unique) element of Π that contains ω. Thus, we can view Π as a function Π : Ω → 2^Ω. For a set S ⊆ Ω, we identify S with its indicator function S : Ω → {0, 1}, defined at ω ∈ Ω by S(ω) = 1 if ω ∈ S and S(ω) = 0 otherwise.</p><p>For sets S, T ⊆ Ω such that T ≠ ∅, we use S|_T ⊆ T to denote the set S ∩ T when viewed as a subset of T. In particular, S|_T is an indicator function from T to {0, 1}.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Probability Theory</head><head>Probability Distributions</head><p>Let P be a distribution over a universe Ω. We sometimes think of P as a vector in R^|Ω| whose value in coordinate ω ∈ Ω is P(ω). In particular, we use ‖P − Q‖_1 to denote the ℓ_1 norm of the vector P − Q ∈ R^|Ω|, where P and Q are probability distributions. We use ω ∼ P to denote a random element ω distributed according to P. We use supp(P) = {ω ∈ Ω : P(ω) > 0} to denote the support of the distribution P.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Random Variables</head><p>Let Σ be any alphabet. We say that X : Ω → Σ is a Σ-valued random variable. If Σ = R, we say that the random variable is real-valued. If X is a real-valued random variable, the expectation of X under P is denoted E_{ω∼P}[X(ω)]. Often, the underlying distribution P is implicit, in which case we simply write E[X]. If X is a Σ-valued random variable and P is a probability distribution, we write P_X or X(P) to denote the induced probability distribution of X under P, i.e., P_X(σ) = (X(P))(σ) := P(X = σ) for all σ ∈ Σ. In particular, we say that X is distributed according to P_X, and we use σ ∼ X(P) to denote a random variable σ distributed according to P_X. The distribution P is often implicit, and we identify X with the underlying distribution P_X.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Events</head><p>We refer to subsets of Ω as events. We use standard shorthand for denoting events. For instance, if X is a Σ-valued random variable and x ∈ Σ, we write X = x to denote the event {ω ∈ Ω : X(ω) = x}. Similarly, for a subset F ⊆ Σ, we write X ∈ F to denote the event {ω ∈ Ω : X(ω) ∈ F}. We use P(E) to denote the probability of E under P. When P is implicit, we use the notation Pr(E) to denote P(E).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conditional Probabilities</head><p>Let E ⊆ Ω be an event with P(E) > 0. Then the conditional distribution of P given E is denoted (P|E) : Ω → R and is defined by (P|E)(ω) = P(ω)/P(E) for ω ∈ E, and (P|E)(ω) = 0 otherwise.</p><p>If E is an event, we write P_{X|E} as shorthand for (P|E)_X.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Measure under Uniform Distribution</head><p>For any set S ⊆ Ω, we sometimes identify S with the uniform distribution over S. In particular, we use x ∼ S to denote x sampled according to the uniform distribution on S.</p><p>For S, π ⊆ Ω such that π ≠ ∅, we use µ_π(S) = |S ∩ π| / |π| to denote the measure of S under the uniform distribution over π. When π = Ω, we omit the subscript and simply write µ(S).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Fourier Analysis</head><head>Fourier Analysis over Subspaces</head><p>For any (finite) vector space V over F_2, the character group of V, denoted V̂, is the set of group homomorphisms mapping V (viewed as an additive group) to {−1, 1} (viewed as a multiplicative group). Each such homomorphism is called a character of V. For functions mapping V → R, we define the inner product ⟨f, g⟩ = E_{x∼V}[f(x) · g(x)]. The character group of V forms an orthonormal basis under this inner product. We refer to the all-ones function χ : V → {−1, 1}, χ ≡ 1, as the trivial character or the zero character, and denote it by χ = ∅.</p><p>For all characters χ ≠ ∅, since ⟨χ, ∅⟩ = 0, we have E_{x∼V}[χ(x)] = 0. For a set S ⊆ V, we identify S with its indicator function S : V → {0, 1} as mentioned before.</p><p>▶ Fact 3. Given a choice of basis for V, there is a canonical isomorphism between V and V̂. Specifically, if V = F_2^n, then the characters of V are the functions of the form χ_s(x) = (−1)^{⟨s, x⟩}, where s ∈ F_2^n and ⟨s, x⟩ = Σ_{i ∈ [n]} s_i x_i.</p><p>▶ Definition 4. For any function f : V → R, its Fourier transform is the function f̂ : V̂ → R defined by f̂(χ) = ⟨f, χ⟩ = E_{x∼V}[f(x) · χ(x)].</p><p>Since the characters of V are orthonormal and V is finite, we can deduce that f is equal to Σ_{χ ∈ V̂} f̂(χ) · χ. An important special case of Plancherel's theorem is Parseval's theorem: E_{x∼V}[f(x)^2] = Σ_{χ ∈ V̂} f̂(χ)^2.</p></div>
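The Fourier transform of Definition 4 and Parseval's theorem can be checked directly for V = F_2^n using the characters of Fact 3. A small numerical sketch (our own illustration, with an arbitrary example set S):

```python
from itertools import product

def fourier(f, n):
    """Fourier transform over F_2^n: fhat[s] = E_x[ f(x) * (-1)^<s,x> ]."""
    points = list(product((0, 1), repeat=n))
    return {
        s: sum(f[x] * (-1) ** sum(si * xi for si, xi in zip(s, x))
               for x in points) / len(points)
        for s in points
    }

n = 3
points = list(product((0, 1), repeat=n))
# Indicator of an arbitrary set S; fhat at the zero character is the density mu(S).
S = {(0, 0, 0), (0, 1, 1), (1, 1, 1)}
f = {x: (1 if x in S else 0) for x in points}
fhat = fourier(f, n)

# Parseval: E_x[f(x)^2] equals the sum of squared Fourier coefficients.
lhs = sum(v * v for v in f.values()) / len(points)
rhs = sum(v * v for v in fhat.values())
assert abs(lhs - rhs) < 1e-12
assert abs(fhat[(0, 0, 0)] - len(S) / 2 ** n) < 1e-12
```

The normalization matches the expectation inner product used above, so the coefficient at the zero character is exactly µ(S).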
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fourier Analysis over Affine Subspaces</head><p>Fix any subspace V ⊆ F_2^n and a vector a ∈ F_2^n. Let U = a + V denote the affine subspace obtained by shifting V by a. For every function f : V → R, we associate it with the function f_a : U → R defined by f_a(x) = f(x + a) for all x ∈ U. This is a bijective correspondence between the set of functions from U to R and the set of functions from V to R. Under this association, we can identify χ ∈ V̂ with χ_a : U → {−1, 1}, where χ_a(x) = χ(x + a) for all x ∈ U. This defines an orthonormal basis U_a := {χ_a : U → {−1, 1} | χ ∈ V̂} for the vector space of functions from U to R. We call this the Fourier basis for U with respect to a. This basis depends on the choice of the shift a ∈ U. However, for all possible shifts b ∈ U and character functions χ ∈ V̂, the functions χ_a and χ_b differ only by a sign. To see this, observe that for all x ∈ U, χ_b(x) = χ(x + b) = χ(x + a) · χ(a + b) = χ(a + b) · χ_a(x), and χ(a + b) ∈ {−1, 1}. We will sometimes drop the subscript and simply use χ ∈ V̂ to index functions in the Fourier basis of U. This is particularly the case when the properties we are dealing with are independent of the choice of basis (for example, the absolute values of the Fourier coefficients of a function).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Multi-Player Games</head><p>In parallel repetition we often work with Cartesian product sets of the form (X_1 × … × X_k)^n. For these sets, we will use subscripts to index the inner product and superscripts to index the outer product. That is, for X = X_1 × … × X_k we view elements x of X^n as tuples (x_1, …, x_k), where x_i ∈ X_i^n. We use x_i^j or x_i(j) to refer to the j-th coordinate of x_i. We use x^j to denote the vector (x_1^j, …, x_k^j).</p><p>A k-player game is a tuple G = (X, Y, Q, W), where X = X_1 × … × X_k is a set of queries, Y = Y_1 × … × Y_k is a set of answers, Q is a probability measure on X, and W : X × Y → {0, 1} is a "winning" predicate. We refer to Q as the query distribution or the input distribution of the game.</p><p>The most important quantity associated with a game is the maximum probability with which the game can be "won".</p><p>▶ Definition 9. The value of a k-player game G = (X, Y, Q, W), denoted val(G), is the maximum, over all deterministic strategies f, of val(G, f), the probability that the players win when playing according to f.</p><p>It is often easier to construct probabilistic strategies for a game, i.e., strategies in which players may use shared and/or individual randomness in computing their answers.</p><p>▶ Definition 10 (Probabilistic Strategies). Let G = (X, Y, Q, W) be a k-player game. A probabilistic strategy for G is a distribution F over deterministic strategies for G. The success probability of F in G is denoted val(G, F) and defined as val(G, F) = E_{f∼F}[val(G, f)].</p><p>A standard averaging argument implies that for every game, probabilistic strategies cannot achieve better success probability than deterministic strategies:</p><p>▶ Fact 11. Replacing "deterministic strategies" by "probabilistic strategies" in Definition 9 yields an equivalent definition.</p><p>The main operation on multi-player games that we consider in this paper is parallel repetition:</p><p>▶ Definition 12 (Parallel Repetition). 
Given a k-player game G = (X, Y, Q, W), its n-fold parallel repetition, denoted G^n, is defined as the k-player game (X^n, Y^n, Q^n, W^n), where W^n(x, y) := Π_{j=1}^n W(x^j, y^j). For x ∈ X^n, we refer to x_i ∈ X_i^n as the input to the i-th player.</p><p>To bound the value of parallel repeated games, it is helpful to analyze the probability of winning in a particular instance of the game under various modified query distributions.</p><p>▶ Definition 13. If G = (X, Y, Q, W^n) is a game (with a product winning predicate), the value of G in the j-th coordinate for j ∈ [n], denoted val^(j)(G), is the value of the game (X, Y, Q, W′), where W′(x, y) = W(x^j, y^j).</p><p>▶ Definition 14 (Game with Modified Query Distribution). Let G = (X, Y, Q, W) be a game. For a probability measure P on X, we write G|P to denote the game (X, Y, P, W). For an event E on X, we write G|E to denote the game (X, Y, Q|E, W).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5">GHZ Distribution</head><p>Let X = Y = {0, 1}^3, and let Q denote the uniform distribution over {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}. Define W : X × Y → {0, 1} at x ∈ X, y ∈ Y by W(x, y) = 1 if and only if x_1 ∨ x_2 ∨ x_3 = y_1 + y_2 + y_3 (mod 2). The GHZ game refers to the 3-player game (X, Y, Q, W), which has value 3/4. The n-fold repeated GHZ game refers to the n-fold parallel repetition of (X, Y, Q, W). Our parallel repetition results easily generalize to any other (constant-sized) answer alphabet Y′ and any predicate W′, as long as the game (X, Y′, Q, W′) has value less than 1.</p><p>We typically use X = (X_1, X_2, X_3) ∈ X^n to denote a random variable distributed according to Q^n, where X_i ∈ X_i^n denotes the input to the i-th player.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Partitioning into Pseudorandom Subspaces</head><p>We make use of a notion of affine partition similar to the one defined in <ref type="bibr">[18]</ref>. We say that Π is an affine partition of (F_2^n)^3 of codimension at most d if Π is a partition of (F_2^n)^3 and: each part π ∈ Π has the form a_π + V_π^3, where V_π is a subspace of F_2^n and a_π ∈ (F_2^n)^3; and each V_π has codimension at most d. The main take-away from this section is Proposition 15, which states the following: given the query distribution to the n-fold GHZ game, and a product event E ⊆ (F_2^n)^3 with large enough probability mass, we can find an affine partition Π of (F_2^n)^3 such that on a typical part π ∈ Π, the non-zero Fourier coefficients of the indicator functions E_i|_{π_i} are small.</p><p>Formally, the proposition is as follows.</p><p>▶ Proposition 15. Let E = E_1 × E_2 × E_3 ⊆ (F_2^n)^3 be such that P(E) = α. For all δ > 0, there exists an affine partition Π of (F_2^n)^3 of codimension at most 3/δ^3 such that the following holds. With probability at least 1 − δ/α over π ∼ Π(P|E), for all i ∈ [3] and non-zero χ ∈ V̂ (where V is the subspace of F_2^n of which π is an affine shift of V^3), the Fourier coefficient of E_i|_{π_i} at χ has absolute value at most δ.</p><p>Recall that Π(P|E) is the distribution induced by sampling x ∼ P|E and outputting the part of Π to which x belongs. Note that in the statement of the proposition, we do not specify a choice of Fourier basis for π_i. This is because for any set S ⊆ π_i, the quantity |Ŝ(χ_{a_i})| is independent of the choice of a_i ∈ π_i, so we simply write |Ŝ(χ)|. The proof of Proposition 15 is similar in nature to the proof of Lemma 6.2 in <ref type="bibr">[18]</ref>, but is much simpler and is presented in the full version of the paper.</p></div>
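The affine partitions of this section are built from cosets of a subspace V of F_2^n. As a minimal illustration of this underlying structure (our own sketch, unrelated to the actual pseudorandomness criterion of Proposition 15), the following partitions F_2^n into the cosets a + V of V = span(basis):

```python
from itertools import product

def span(basis):
    """All F_2-linear combinations of the given basis vectors (tuples in F_2^n)."""
    vecs = {tuple(0 for _ in basis[0])}
    for b in basis:
        vecs |= {tuple(x ^ y for x, y in zip(v, b)) for v in vecs}
    return vecs

def coset_partition(n, basis):
    """Partition F_2^n into cosets a + V of V = span(basis); cosets are
    pairwise disjoint and cover the whole space."""
    V = span(basis)
    seen, parts = set(), []
    for m in range(2 ** n):
        a = tuple((m >> i) & 1 for i in range(n))
        if a not in seen:
            coset = frozenset(tuple(x ^ y for x, y in zip(a, v)) for v in V)
            seen |= coset
            parts.append(coset)
    return parts

parts = coset_partition(4, [(1, 0, 0, 0), (0, 1, 1, 0)])  # dim V = 2, codim 2
assert len(parts) == 4 and all(len(p) == 4 for p in parts)
```

A codimension-d subspace yields 2^d cosets of equal size, which is why bounding the codimension bounds the number (and minimum density) of parts.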
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Key Fourier Analytic Lemmas</head><p>We crucially make use of the following lemma (Lemma 16), the proof of which can be found in the full version of the paper.</p><p>If furthermore for all non-zero χ ∈ V we have B̂(χ) ≤ δ_2, then the stronger conclusion of the lemma holds.</p><p>Recall from Section 2.2 that μ_{π_i}(S) ≜ |S ∩ π_i|/|π_i|. In the statement of this lemma, we don't specify a choice of Fourier basis for π_2 and π_3. Since the properties Ĉ(χ_{a_3}) ≤ δ_1 and B̂(χ_{a_2}) ≤ δ_2 are independent of the choice of a_2 and a_3, we simply write Ĉ(χ) ≤ δ_1 and B̂(χ) ≤ δ_2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Main Proof</head><p>We use the following Parallel Repetition Criterion, which is similar to, but weaker than, the one from <ref type="bibr">[18]</ref> for the GHZ game, and has a slightly simpler proof. Let G refer to the n-fold parallel repetition of the GHZ game. Let P = Q^n.</p><p>▶ Lemma 17 (Parallel Repetition Criterion). Let c ∈ (0, 1] be a constant and ρ : N → R be a function such that ρ(n) ≥ exp(−n). Suppose for all large n ∈ N and all subsets E ⊆ (F_2^n)^3 with P(E) ≥ ρ(n), we have E_{i∼[n]} val^(i)(G|E) ≤ 1 − c. Then val(G) ≤ ρ(n)^Ω(1).</p><p>This lemma is proved in <ref type="bibr">[18]</ref> under the weaker assumption that there is some coordinate i ∈ [n] for which val^(i)(G|E) ≤ 1 − c. The proof is slightly simpler under our stronger assumption that E_{i∼[n]} val^(i)(G|E) ≤ 1 − c. We prove this in Appendix A.1.</p></div>
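To see the quantitative shape of the conclusion, here is a sketch of how iterating the hypothesis gives the bound, assuming (as in the argument of Appendix A.1) the construction produces m = Θ(log(1/ρ(n))/c) coordinates j_1, …, j_m with each conditional winning probability at most 1 − c/2:

```latex
\mathrm{val}(G)
  \;\le\; \prod_{i=1}^{m} \Pr\!\left[W_{j_i} \,\middle|\, W_{j_1} \wedge \cdots \wedge W_{j_{i-1}}\right]
  \;\le\; \left(1 - \tfrac{c}{2}\right)^{m}
  \;=\; \exp\!\left(-\Theta\!\left(\log \tfrac{1}{\rho(n)}\right)\right)
  \;=\; \rho(n)^{\Omega(1)}.
```

Here W_{j} denotes the event of winning coordinate j; the constant in Ω(1) depends on c.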
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Given this criterion, our goal of showing an inverse polynomial bound for val(G) reduces to showing the following.</head><p>Let E = E_1 × E_2 × E_3 ⊆ (F_2^n)^3 be a product event with P(E) = α ≥ ρ(n), and let n be large enough. It suffices to show that E_{i∼[n]} val^(i)(G|E) ≤ 0.95. We do this as follows.</p><p>Let δ = α^20/n^{1/40}. Proposition 15 implies the existence of a partition Π of (F_2^n)^3 into affine subspaces of codimension at most 3/δ^3. Every π ∈ Π is of the form a + V^3, where V ⊆ F_2^n is a subspace and a ∈ (F_2^n)^3. With probability at least 1 − δ/P(E) ≥ 1 − o(1) over π ∼ Π(P|E), we have Ê_i|_{π_i}(χ) ≤ δ for all i ∈ [3] and non-zero χ ∈ V, where V is the subspace of F_2^n for which π is an affine shift of V^3.</p><p>Under the distribution Π(P|E), the probability that π is sampled equals (P|π)(E)·P(π)/P(E) by Bayes' rule. This implies that the probability that π ∼ Π(P|E) satisfies (P|π)(E) ≤ P(E)/10 is at most 1/10. We will focus on π = π_1 × π_2 × π_3 that satisfy both these properties: the measure of E under P|π is significant, and, for all i ∈ [3], all non-zero Fourier coefficients of the sets E_i restricted to π_i are small.</p><p>▶ Definition 18. We say that π is good if (P|π)(E) ≥ α/10, and for all non-zero χ ∈ V and i ∈ [3], we have Ê_i|_{π_i}(χ) ≤ δ.</p><p>By a union bound, a random π ∼ Π(P|E) is good with probability at least 1 − 1/10 − δ/α. Fix any such good π = π_1 × π_2 × π_3 ∈ Π, and let V be the subspace such that π is an affine shift of V^3.</p><p>For all z ∈ E_3 ∩ π_3, define a (partial) matching M_z between π_1 and π_2 as follows: match x ∈ E_1 ∩ π_1 with y ∈ E_2 ∩ π_2 whenever x + y = z. Let G be the bipartite graph between π_1 and π_2 obtained by combining edges from the matchings for z ∈ E_3 ∩ π_3, and let E(G) denote the set of edges in G. For every edge e ∈ E(G), we can identify e with a valid input to the n-fold GHZ game that is contained in E ∩ π: namely, we associate (x_0, y_0) ∈ E(G) with the input (x_0, y_0, x_0 + y_0) ∈ supp(P) ∩ E ∩ π. This is a bijective correspondence because of the way we defined the graph G. Under this correspondence, the uniform distribution over edges of G corresponds to the distribution P|E, π. We now introduce the important notion of a bow tie.</p><p>▶ Definition 19 (Bow Tie). We say that a subset of edges b ⊆ E(G) is a bow tie if b = {x_0, x_1} × {y_0, y_1} for some x_0 ≠ x_1 ∈ π_1 and y_0 ≠ y_1 ∈ π_2 such that x_0 + y_0 = x_1 + y_1 (or equivalently, x_0 + y_1 = x_1 + y_0). Alternatively, for z_0 = x_0 + y_0 and z_1 = x_0 + y_1, we have (x_i, y_j, z_k) ∈ supp(P) for all (i, j, k) ∈ supp(Q).</p><p>Let b = {x_0, x_1} × {y_0, y_1} be a bow tie. As before, we identify b with the indicator vector b ∈ {0, 1}^{E(G)} of the edges of b, that is, b(e) = 1 iff e ∈ {(x_i, y_j) : i, j ∈ {0, 1}}. We use b̃ to denote the uniform distribution on the edges of the bow tie, when viewed as inputs to the n-fold GHZ game. More precisely, b̃ denotes the uniform distribution on {(x_i, y_j, x_i + y_j) | i, j ∈ {0, 1}}.</p><p>We say that b differs in the i-th coordinate if x_0(i) ≠ x_1(i) (equivalently, y_0(i) ≠ y_1(i)). Let b be a bow tie and I ⊆ [n] be the coordinates on which b differs. The following claim shows that val^(i)(G|b̃) ≤ 3/4 for all i ∈ I. The proof is deferred to Appendix A.2.</p><p>▷ Claim 20. Let b = {x_0, x_1} × {y_0, y_1} be a bow tie, and let I ⊆ [n] be the subset of coordinates on which b differs. Then val^(i)(G|b̃) ≤ 3/4 for all i ∈ I.</p><p>Let B denote the set of all bow ties. Consider the distribution on edges defined by first sampling a uniformly random bow tie from B, and then a uniformly random edge from the bow tie. We now provide an alternate description of this distribution. For each z ∈ E_3 ∩ π_3, define 1_z ∈ {0, 1}^{|E(G)|} as follows. For each e = (x, y) ∈ E(G), define 1_z(e) = 1 if x and y are both matched in M_z but not to each other, and define 1_z(e) = 0 otherwise. Alternatively, 1_z is the indicator of the set ((L_z × R_z) ∩ E(G)) \ M_z, where L_z ⊆ π_1 and R_z ⊆ π_2 denote the sets of vertices matched by M_z. Define v = E_{z∼E_3∩π_3}[1_z]; this is an average of vectors, each of which has non-negative values, so v induces a distribution on E(G). Consider the distribution ṽ = v/‖v‖_1 on E(G) defined by normalizing v. We show that this distribution is an alternate description of the aforementioned distribution: we can think of ṽ := v/‖v‖_1 as obtained by sampling a uniformly random bow tie b in G and outputting a uniformly random edge of b. The proof of this is deferred to Appendix A.3.</p><p>Our goal now is to show that the distribution ṽ is close to the uniform distribution over edges of G. To do so, we study the ℓ_1 and ℓ_2 norms of v. Since (P|π)(E) > 0, the set π ∩ supp(P) is non-empty; therefore, we may choose a ∈ supp(P) so that π = a + V^3. This, along with Equation (3), implies that the first hypothesis of Lemma 16 is satisfied; Lemma 16 then yields Equation (4). We make use of the following bounds on the ℓ_1 and ℓ_2 norms of v. The proofs of these are by Fourier analysis and are deferred to Appendices A.4 and A.5.</p></div>
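The bow-tie/matching correspondence underlying Claim 21 can be checked directly on a toy instance; a sketch under the assumption, reconstructed from Appendix A.3, that M_z matches x ∈ E_1 with y = x + z when y ∈ E_2. Here π is the whole space (V = F_2^n), and E_1, E_2, E_3 are small arbitrary sets, not data from the paper:

```python
from itertools import product
from collections import Counter

# Toy instance: pi is the whole space (V = F_2^n), and E1, E2, E3 are
# arbitrary subsets chosen only for illustration.
n = 3
space = [tuple(p) for p in product((0, 1), repeat=n)]
xor = lambda a, b: tuple(u ^ v for u, v in zip(a, b))

E1 = {x for x in space if x != (0, 0, 0)}
E2 = {y for y in space if y != (1, 1, 1)}
E3 = {z for z in space if z not in {(0, 0, 1), (0, 1, 0)}}

# Edges of G correspond to inputs (x, y, x + y) in supp(P) intersected with E.
edges = [(x, y) for x in E1 for y in E2 if xor(x, y) in E3]
edge_set = set(edges)

# Enumerate bow ties {x0, x1} x {y0, y1}: all four pairs must be edges, and
# x0 + y0 = x1 + y1.  Count, for each edge, the bow ties containing it.
bowtie_count = Counter()
num_bowties = 0
for (x0, y0) in edges:
    for (x1, y1) in edges:
        is_bowtie = (x0 < x1 and y0 < y1 and xor(x0, y0) == xor(x1, y1)
                     and (x0, y1) in edge_set and (x1, y0) in edge_set)
        if is_bowtie:
            num_bowties += 1
            for e in ((x0, y0), (x0, y1), (x1, y0), (x1, y1)):
                bowtie_count[e] += 1

# 1_z(e) = 1 iff both endpoints of e are matched in M_z, but not to each
# other; M_z matches x in E1 with y = x + z when y lies in E2.
def one_z(e, z):
    x, y = e
    return int(xor(x, z) in E2 and xor(y, z) in E1 and xor(x, y) != z)

# v(e) is proportional to sum_z 1_z(e), which counts bow ties containing e.
assert num_bowties > 0
for e in edges:
    assert sum(one_z(e, z) for z in E3) == bowtie_count[e]
```

Normalizing either count gives the same distribution on edges, which is the content of Claim 21 on this toy instance.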
<div xmlns="http://www.tei-c.org/ns/1.0"><head>▷ Claim 22</head><p>▷ Claim 23.</p><p>We now bound ‖ṽ‖_2 = ‖v‖_2/‖v‖_1 by plugging in appropriate bounds on δ and dividing Equation (6) by Equation (5). Our choice of δ = α^20/n^{1/40}, and our assumption that α/10 ≤ (P|π)(E) (which in turn is at most min_{i∈[3]} μ_{π_i}(E_i)), imply that δ is much smaller than each μ_{π_i}(E_i). Thus, comparing the dominant term on the right-hand side of Equation (5) with the dominant term on the right-hand side of Equation (6), and using Equation (4), we can rewrite Equation (9) as Equation (10). This, together with the fact that by construction ‖ṽ‖_1 = 1, is sufficient to deduce that ṽ is close to the "uniform distribution" vector ũ. More formally, we have:</p><p>▶ Fact 24. Suppose that ṽ ∈ R^m is an m-dimensional vector such that ‖ṽ‖_1 = 1 and ‖ṽ‖_2^2 ≤ (1 + β^2)/m. Then ‖ṽ − ũ‖_1 ≤ β, where ũ denotes the vector (1/m, . . . , 1/m).</p><p>The proof of Fact 24 is deferred to Appendix A.6. Applying Fact 24 to Equation (10) shows that d_TV(ṽ, ũ) = o(1). In other words, a uniformly random edge of a uniformly random bow tie is distributed close to uniformly on E(G).</p><p>We now show that a typical bow tie differs in a considerable fraction of coordinates.
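Fact 24 is a standard ℓ_2-to-ℓ_1 computation; a sketch, assuming ṽ is a probability vector with ‖ṽ‖_2^2 ≤ (1 + β^2)/m (so that ⟨ṽ, ũ⟩ = 1/m and ‖ũ‖_2^2 = 1/m):

```latex
\|\tilde v - \tilde u\|_2^2
  = \|\tilde v\|_2^2 - 2\langle \tilde v, \tilde u\rangle + \|\tilde u\|_2^2
  = \|\tilde v\|_2^2 - \frac{1}{m}
  \le \frac{\beta^2}{m},
\qquad
\|\tilde v - \tilde u\|_1 \le \sqrt{m}\,\|\tilde v - \tilde u\|_2 \le \beta,
```

where the last step is Cauchy–Schwarz. Consequently d_TV(ṽ, ũ) = ‖ṽ − ũ‖_1/2 ≤ β/2, which is o(1) whenever β = o(1).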
This is the content of Claim 25, whose proof is deferred to Appendix A.7. Since a random π ∼ Π(P|E) is good with probability at least 1 − δ·α^{−1} − 1/10 ≥ 0.9 − o(1) ≥ 0.8, combining the above facts yields the desired bound E_{i∼[n]} val^(i)(G|E) ≤ 0.95.</p><p>Proof of Lemma 17 (Appendix A.1). Let W_i denote the event of winning the GHZ game in the j_i-th coordinate under the strategy f, and let W_{≤i} = W_1 ∩ · · · ∩ W_i. We construct a sequence of coordinates j_1, . . . , j_m so that each conditional winning probability P[W_{j_i} | W_{≤i−1}] is at most 1 − c/2. This implies that val(G) ≤ (1 − c/2)^{Θ(log(1/ρ(n)))} = ρ(n)^{Ω(1)}. Fix any i ∈ {0, . . . , m − 1} and assume that we have found j_1, . . . , j_i. Let X ∼ P, and let X_{≤i} denote X restricted to the coordinates {j_1, . . . , j_i}. Let Y_{≤i} denote the outputs of the players restricted to the coordinates {j_1, . . . , j_i}, and let Z_{≤i} = (X_{≤i}, Y_{≤i}). Since W_{≤i} is a function of Z_{≤i}, we may condition on the value of Z_{≤i}.</p><p>Let F = F(z_{≤i}) denote the event that the probability P[Z_{≤i} = z_{≤i} | W_{≤i}] is not too small relative to 1/N, where N = 32^i ≥ |supp(Z_{≤i})|. We argue that F occurs with probability at least 1 − c/2. This is because we are sampling z_{≤i} with probability P[Z_{≤i} = z_{≤i} | W_{≤i}], hence the measure of z_{≤i} for which F fails is small. Fix any z_{≤i} such that F holds. Our choice of m implies that (1/N)·(c/2) ≥ ρ(n). Note that we can express the distribution P|Z_{≤i} = z_{≤i} as P|E, where E is a product event with P(E) ≥ (1/N)·(c/2) ≥ ρ(n). By linearity of expectation, we can fix a coordinate j ∈ [n] on which the value of the game, conditioned on E, is at most 1 − c. Note that j ∉ {j_1, . . . , j_i}, since we already win the game on these coordinates. This, along with Equation (11), completes the proof. ◀</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.2 Proof of Claim 20</head><p>Proof of Claim 20. Let i ∈ I. Since the bow tie b differs in the i-th coordinate, we have x_0(i) ≠ x_1(i), y_0(i) ≠ y_1(i) and z_0(i) ≠ z_1(i). We may thus assume without loss of generality that x_0(i) = y_0(i) = 0. Define embeddings φ_1 : F_2 → {x_0, x_1}, φ_2 : F_2 → {y_0, y_1} and φ_3 : F_2 → {z_0, z_1} at a ∈ F_2 by φ_1(a) = x_a, φ_2(a) = y_a and φ_3(a) = z_a. It follows that for all a ∈ {0, 1} and j ∈ [3], we have (φ_j(a))(i) = a. In particular, for φ = φ_1 × φ_2 × φ_3, the distribution φ(Q) is exactly the distribution b̃. Given any strategies f̃_1, f̃_2, f̃_3 : F_2^n → F_2^n for the players for the n-fold GHZ game restricted to the query distribution b̃, the functions φ_1, φ_2, φ_3 induce a strategy for the GHZ game as follows. Define f_j : F_2 → F_2 by f_j(a) = (f̃_j(φ_j(a)))(i). The success probability of the strategy f_1 × f_2 × f_3 on the distribution Q is exactly the success probability in the i-th coordinate of the strategy f̃_1 × f̃_2 × f̃_3 on the distribution b̃. It follows that val^(i)(G|b̃) ≤ 3/4. ◁</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.3 Proof of Claim 21</head><p>Proof of Claim 21. Fix any e = (x_0, y_0) ∈ E(G), and fix z_1 ∈ E_3 ∩ π_3. Set x_1 = y_0 + z_1 and y_1 = x_0 + z_1. Then 1_{z_1}(e) = 1 if and only if b = {x_0, x_1} × {y_0, y_1} is a bow tie. Observe that as we vary z_1 ∈ E_3 ∩ π_3, we obtain all possible bow ties that contain the edge e, i.e. the bow ties b for which b(e) ≠ 0. This implies that v(e) is proportional to the number of bow ties containing the edge e, and hence ṽ = v/‖v‖_1 is the distribution obtained by sampling a uniformly random bow tie and outputting a uniformly random edge of it. ◁</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.4 Proof of Claim 22</head><p>For ease of notation, we define weight functions as follows.</p><p>▶ Definition 26 (Weight functions).</p><p>The first hypothesis of Lemma 16 is satisfied due to Equation (3). Lemma 16 implies a bound on the weight functions.</p><p>All the hypotheses are satisfied due to Equation (3). Lemma 16, along with conditioning z ∼ π_3 on z ∈ E_3, implies a corresponding bound in expectation.</p><p>Substituting this in the previous inequalities and taking an expectation over z ∼ E_3 ∩ π_3 yields the claim. ◁</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.5 Proof of Claim 23</head><p>Proof of Claim 23. Define wt_π(·) as in the proof of Claim 22. Let z, z′ ∈ E_3 ∩ π_3, and consider the inner product ⟨1_z, 1_{z′}⟩.</p><p>We apply Lemma 16 with suitable parameters. The first hypothesis is satisfied due to Equation (3). Taking an expectation over z′ ∼ E_3 ∩ π_3 and applying Cauchy–Schwarz yields a bound in terms of the quantities |L_z ∩ L_{z′}| and |R_z ∩ R_{z′}|.</p><p>Observe that μ_{π_1}(L_z ∩ L_{z′}) = E_{x∼π_1}[L_z(x)·E_2(x + z′)] for all z′ ∈ E_3 ∩ π_3. We now apply Lemma 16 with parameters A = L_z ∩ π_1, B = E_2 ∩ π_2, C = E_3 ∩ π_3. All the hypotheses are satisfied due to Equation (3); Lemma 16, along with the aforementioned observation, bounds |L_z ∩ L_{z′}|. An analogous inequality holds for |R_z ∩ R_{z′}|. Substituting this in the previous inequality and using the fact that √(a + b) ≤ √a + √b, we obtain the desired bound on ‖v‖_2.</p><p>We now take an expectation over z ∼ E_3 ∩ π_3 and use Equation (12) to conclude that ‖ṽ − ũ‖_2^2 = ⟨ṽ − ũ, ṽ − ũ⟩ = ‖ṽ‖_2^2 + ‖ũ‖_2^2 − 2⟨ũ, ṽ⟩ is small. Finally, we bound the ℓ_1 distance in terms of the ℓ_2 distance: ‖ṽ − ũ‖_1 ≤ ‖ṽ − ũ‖_2 · √m ≤ 3β. ◀</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.7 Proof of Claim 25</head><p>Proof of Claim 25. It suffices to show that a random b ∼ B differs in less than n/3 coordinates with probability at most 2^{−Ω(n)} = o(1).</p><p>The Chernoff bound implies that Pr_{x_0,x_1∼F_2^n}[hwt(x_0 + x_1) &lt; n/3] ≤ 2^{−Ω(n)}. Conditioning on x_0, x_1 ∈ π_1, we conclude that Pr_{x_0,x_1∼π_1}[hwt(x_0 + x_1) &lt; n/3] ≤ 2^{−Ω(n)} · 2^{2n}/|V|^2. Let b = {x_0, x_1} × {y_0, y_1} be a bow tie. By definition, we have y_1 = x_0 + x_1 + y_0; in particular, the bow tie b is uniquely identified by x_0, x_1, y_0. This implies that the probability that a random b ∼ B differs in less than n/3 coordinates is at most (|V|^3/|B|) · Pr_{x_0,x_1∼π_1}[hwt(x_0 + x_1) &lt; n/3].</p><p>Recall that v = E_{z∼E_3∩π_3}[1_z] = (1/(μ_{π_3}(E_3)·|V|)) Σ_{z∈E_3∩π_3} 1_z, where for each e, Σ_{z∈E_3∩π_3} 1_z(e) equals the number of bow ties containing the edge e. Since each bow tie contains 4 edges, we have ‖v‖_1 = (4/(μ_{π_3}(E_3)·|V|)) · |B|. Then, Equation (7) (a lower bound on ‖v‖_1) implies that |V|^3/|B| ≤ 8/α^6. Recall that α ≥ n^{−O(1)} and the co-dimension of V is o(n). This implies that 2^{2n}/|V|^2 = 2^{o(n)}. This, along with the above calculation, implies that the probability that a uniformly random b ∼ B differs in less than n/3 coordinates is at most 2^{−Ω(n)} · 2^{o(n)} = 2^{−Ω(n)}. This completes the proof. ◁</p></div>
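The Chernoff step can be illustrated numerically (a sketch, not from the paper): for uniform x_0, x_1, the sum x_0 + x_1 is uniform, so hwt(x_0 + x_1) is Binomial(n, 1/2) and the lower tail below n/3 decays exponentially in n.

```python
from math import comb

def prob_low_weight(n):
    """Pr[hwt(x0 + x1) < n/3] for uniform x0, x1 in F_2^n, computed exactly:
    x0 + x1 is uniform, so its Hamming weight is Binomial(n, 1/2).
    (Take n divisible by 3, so 'weight < n/3' means weight <= n/3 - 1.)"""
    return sum(comb(n, k) for k in range(n // 3)) / 2 ** n

# The tail probability shrinks rapidly as n grows.
for n in (30, 60, 90):
    print(n, prob_low_weight(n))
```

The exact tail already drops below 5% at n = 30 and keeps decaying exponentially, matching the 2^{−Ω(n)} bound used above.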
		</body>
		</text>
</TEI>
