<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Ramsey, Paper, Scissors</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>12/01/2020</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10263908</idno>
					<idno type="doi">10.1002/rsa.20950</idno>
					<title level='j'>Random Structures &amp; Algorithms</title>
<idno type="issn">1042-9832</idno>
<biblScope unit="volume">57</biblScope>
<biblScope unit="issue">4</biblScope>					

					<author>Jacob Fox</author><author>Xiaoyu He</author><author>Yuval Wigderson</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[We introduce a graph Ramsey game called Ramsey, Paper, Scissors. This game has two players, Proposer and Decider. Starting from an empty graph on n vertices, on each turn Proposer proposes a potential edge and Decider simultaneously decides (without knowing Proposer's choice) whether to add it to the graph. Proposer cannot propose an edge which would create a triangle in the graph. The game ends when Proposer has no legal moves remaining, and Proposer wins if the final graph has independence number at least s. We prove a threshold phenomenon exists for this game by exhibiting randomized strategies for both players that are optimal up to constants. Namely, there exist constants 0 < A < B such that (under optimal play) Proposer wins with high probability if s < A√n log n, while Decider wins with high probability if s > B√n log n. This is a factor of Θ(√log n) larger than the lower bound coming from the off-diagonal Ramsey number r(3, s).]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Ramsey's theorem states that for every m, s ≥ 3, there exists a least positive integer r(m, s) for which every graph on r(m, s) vertices has a clique of order m or an independent set of order s. The study of the Ramsey numbers r(m, s) and their many variations has a long history and holds a central place in extremal combinatorics.</p><p>One notable problem is the study of the asymptotic growth of r(3, s). The classical argument of Erdős and Szekeres <ref type="bibr">[7]</ref> yields an upper bound of r(3, s) ≤ (s+1 choose 2), while an involved probabilistic construction of Erdős <ref type="bibr">[5]</ref> shows that r(3, s) = Ω(s²/log² s). Spencer <ref type="bibr">[15]</ref> later used the Lovász Local Lemma to give a simpler proof of this same lower bound.</p><p>The upper bound was then improved to O(s²/log s) by Ajtai, Komlós, and Szemerédi <ref type="bibr">[1]</ref>, and later Shearer <ref type="bibr">[14]</ref> improved the constant factor. Finally, the log s gap was closed by Kim <ref type="bibr">[12]</ref>, who proved that r(3, s) = Θ(s²/log s). Subsequently, Bohman <ref type="bibr">[2]</ref> used the so-called triangle-free process to reprove Kim's lower bound, and further improvements <ref type="bibr">[3,</ref><ref type="bibr">8]</ref> to this analysis have determined the asymptotics of r(3, s) up to a factor of 4 + o(1). For more details on these results, see the survey of Spencer <ref type="bibr">[16]</ref> on the r(3, s) problem.</p><p>Many interesting questions in graph Ramsey theory concern the game theory of various graph-building and graph-coloring games, usually played between two players. 
The earliest example of a graph Ramsey game was studied by Erdős and Selfridge <ref type="bibr">[6]</ref>, who considered a class of games known as positional games; a prototypical positional game is the Maker-Breaker game on graphs, in which two players, Maker and Breaker, take turns claiming the edges of a complete graph K_n, and Maker wins by building a graph with a prescribed property (such as containing a large clique). A more symmetric version of this game is usually just called the Ramsey game: given a fixed graph H, two players take turns claiming edges of a large complete graph until one player wins by building a copy of H. The misère version of this game, in which the first player to build a copy of H loses, has also been studied. For an introduction to these topics, see the book of Hefetz, Krivelevich, Stojaković, and Szabó <ref type="bibr">[10]</ref>.</p><p>A game that more closely resembles the one we study in this paper is the so-called online Ramsey game, which starts instead on a large empty graph and involves two players: Builder, who builds an edge on each turn, and Painter, who then paints it red or blue. Builder's goal is to build a monochromatic clique of a certain order n using as few turns as possible. By Ramsey's theorem, Builder can always win by building the edges of a large complete graph, and Painter's goal is simply to delay this eventuality by as long as possible.</p><p>Ever since Erdős proved the lower bound r(s, s) ≥ 2^{s/2} on the classical Ramsey numbers using the probabilistic method, randomness has been ever-present in graph Ramsey theory. All known proofs of exponential lower bounds on r(s, s) use the probabilistic method. 
Using a careful analysis of random play, Conlon, Fox, He, and Grinshpun <ref type="bibr">[4]</ref> recently proved that the online Ramsey game takes at least 2^{(2−√2)s−O(1)} turns, making an exponential improvement on the trivial bound of 2^{s/2−1} in that setting.</p><p>In certain cases, randomness is even baked directly into the definition of the game itself. Friedgut, Kohayakawa, Rödl, Ruciński, and Tetali <ref type="bibr">[9]</ref> studied Ramsey games against a one-armed bandit: a variation of the online Ramsey game in which Builder builds a uniform random unbuilt edge on each turn from a fixed set of vertices and Painter must color these incoming edges red or blue while avoiding monochromatic triangles for as long as possible.</p><p>In this paper, we study another graph-building game we call Ramsey, Paper, Scissors, due to the simultaneous nature of the turns. The game is played between two players, Proposer and Decider, on a fixed set of n vertices, who jointly build a graph on these vertices one edge at a time. On each turn of the game, Proposer proposes a pair of vertices and Decider simultaneously decides whether to add this pair as an edge. Proposer cannot propose pairs that have been proposed before, nor pairs that would introduce a triangle to the graph if built. It is essential that Proposer and Decider make their moves simultaneously in each turn, so that Proposer doesn't know whether Decider intends to build the edge before proposing it, and Decider doesn't know which pair Proposer will propose. However, after both players make their choice, they each learn the other's choice (so Proposer learns whether the edge was added to the graph, and Decider learns which pair was proposed).</p><p>Ramsey, Paper, Scissors ends when Proposer has no legal moves, and Proposer wins if the independence number of the final graph is at least a predetermined value s. 
Note that if n ≥ r(3, s), then Proposer always wins, since the final graph must have independence number at least s. Thus, Proposer wins if s ≤ c√(n log n) for some constant c &gt; 0. On the other hand, Decider can always win if s = n by simply saying YES to the first pair Proposer proposes: this means that the final graph will have at least one edge, and thus must have independence number at most n − 1. Finally, if s is between these two bounds (i.e. s &lt; n and n &lt; r(3, s)), then both players can win with positive probability via randomized strategies. Namely, Proposer can propose pairs randomly, and if Proposer gets lucky, the only edges Decider will say YES to will be the n − 1 edges incident to a single vertex, thus forcing the final graph to be the star K_{1,n−1} with independence number n − 1. Of course, if Decider says YES to fewer than n − 1 pairs then, if Proposer is lucky, the independence number is still at least n − 1.</p><p>For the other randomized strategy, Decider can fix a triangle-free graph H on n vertices with m edges and independence number less than s, and say YES exactly m times, choosing these m times completely at random. If Decider gets lucky, then exactly a copy of H is built, thus achieving an independence number less than s. However, both of these strategies will only succeed with extremely small probability, and it is therefore natural to ask for (randomized) strategies for both Proposer and Decider that succeed with high probability; as usual, we say that an event E happens with high probability (w.h.p.) if Pr(E) → 1 as n → ∞. Indeed, our main theorem states that such strategies exist for both players for s = Θ(√n log n).</p><p>Theorem 1. If s ≤ (1/1000)√n log n then Proposer can win w.h.p., while if s ≥ 1000√n log n, Decider can win w.h.p.</p><p>One surprising consequence of Theorem 1 is that the fully random strategy is not optimal for Proposer. 
Indeed, if Proposer were to play fully randomly (i.e. proposing a uniformly random pair among all remaining open pairs at each step), then Decider can choose to simply answer YES to every proposal. In that case, the game will simply follow the so-called triangle-free process, and Bohman <ref type="bibr">[2]</ref> proved that w.h.p., the triangle-free process produces a graph with independence number O(√(n log n)). Thus, if Proposer were to play fully randomly, Decider could respond with a strategy that saves a factor of Ω(√log n) from the optimum. Nevertheless, for Decider, a completely random strategy is optimal (up to a constant factor).</p><p>This paper is organized as follows. In the next section, we formally define the game and describe the Proposer and Decider strategies we will use to prove Theorem 1. In Section 3, we prove the upper bound in Theorem 1 by studying the random Decider strategy. This argument is motivated by an old proof of Erdős <ref type="bibr">[5]</ref> that the off-diagonal Ramsey numbers satisfy r(3, t) = Ω(t²/(log t)²), and relies on a concentration lemma of Conlon, Fox, He, and Grinshpun <ref type="bibr">[4]</ref>. This kind of argument has been generalized to prove lower bounds on all off-diagonal Ramsey numbers by Krivelevich <ref type="bibr">[13]</ref>.</p><p>Finally, in Section 4, we prove the lower bound in Theorem 1. This last argument is the most delicate, and requires two main ingredients: an appropriate "semi-random" strategy for Proposer and the analysis of an auxiliary game, whose upshot is that Azuma's inequality remains close to true even when an adversary is allowed to weakly affect the values taken by a sequence of random variables.</p><p>In the concluding remarks, we mention some open problems and conjectures relating to this and other Ramsey games.</p><p>All logarithms are base 2 unless otherwise stated. 
For the sake of clarity of presentation, we systematically omit floor and ceiling signs whenever they are not crucial.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background and Winning Strategies</head><p>Ramsey, Paper, Scissors is the following two-player graph-building game played on a fixed set V of n vertices. Each turn of the game yields a new graph G_i = (V, E_i) on these vertices. The game is initialized with G_0 as the empty graph, and each G_i will either be equal to G_{i−1} or will add a single new edge to G_{i−1}. Throughout the game, G_i is required to be triangle-free. In order to ensure this, call a pair {x, y} of vertices with {x, y} ∉ E_i closed in G_i if there is some z ∈ V such that {x, z}, {y, z} ∈ E_i; this means that the pair {x, y} cannot be added as an edge without introducing a triangle to the graph. Let C_i denote the set of closed pairs in G_i, and let O_i denote the set of open pairs, namely the pairs that are neither edges nor closed in G_i.</p><p>The game proceeds as follows. Initialize a "forbidden" set F_0 = ∅; in general, F_i will contain all pairs that Proposer has proposed up to turn i. On turn i, Proposer chooses a pair {x, y} ∈ O_i \ F_i. Decider does not learn what pair Proposer has chosen, but must decide to answer either YES or NO. If Decider answers YES, then we add the edge {x, y} to G_i, so that E_{i+1} = E_i ∪ {xy}. Otherwise, if Decider answers NO, then E_{i+1} = E_i. Finally, regardless of Decider's answer, the pair {x, y} is added to the forbidden set, namely F_{i+1} = F_i ∪ {xy}. Thus, Proposer can never propose the same pair twice. Both players can see the contents of F_i, so at this point Decider learns what pair was proposed. The game ends when Proposer has no legal moves left, on the first turn i where O_i ⊆ F_i. Proposer wins if the final graph thus produced contains an independent set of order s (for some parameter s), and Decider wins otherwise.</p><p>We now sketch the strategies for Proposer and Decider which we will use to prove the two parts of Theorem 1. 
Decider's strategy is simply to say YES on each turn with probability p = Cn^{−1/2}, randomly and independently, for an appropriately chosen constant C &gt; 0. As a result, the final graph built will be a subgraph of the Erdős–Rényi random graph G(n, p), where some edges may have been removed to make it triangle-free. We will show in Section 3 that not too many such edges are removed from any set of a certain size, so the independence number at the end of the game will still be Θ(√n log n). As mentioned above, Proposer's strategy cannot be purely random, for otherwise Decider could respond by saying YES on every single turn. The resulting graph process would be exactly the triangle-free process, which results in a final graph with an independence number of Θ(√(n log n)) w.h.p. <ref type="bibr">[2]</ref>. To achieve an independence number of Θ(√n log n), Proposer will divide the game into m stages (which we call epochs), where m is extremely slowly growing (its growth is of order log* n), and Proposer's strategy is to force Decider to progressively ratchet up the fraction of YES answers in each epoch. Roughly, this is done as follows, though the precise strategy is slightly more involved. Proposer divides the vertex set into 2m sets U_i, V_i, where 1 ≤ i ≤ m.</p><p>In epoch i, Proposer proposes all the pairs from U_i to V_j for j ≥ i and all the open pairs within V_i, mixing together these two kinds of pairs in a uniformly random order.</p><p>To see why Decider must answer YES more often with each epoch, note that in order to keep V_i from containing a large independent set, Decider must answer YES with a certain density p. But then the edges built from U_i to V_{i+1} will have roughly the same density p, which will then close a large fraction of the pairs in V_{i+1}. Thus, on the next epoch, Decider must answer YES much more frequently in order to reach the same final edge density in V_{i+1}. 
After enough epochs, this becomes impossible. The strategy Proposer actually uses differs slightly from the one sketched above, in order to simplify the analysis; it is formally laid out in Section 4, as are the details of this heuristic argument.</p></div>
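The game loop just formalized is compact enough to simulate directly. The following is a minimal sketch, not part of the paper; the function name, the YES-oracle interface, and the fully random Proposer are all illustrative assumptions:

```python
import itertools
import random

def play_rps(n, decider_says_yes, seed=0):
    """Play one game of Ramsey, Paper, Scissors on n vertices.

    decider_says_yes() is called once per turn with no knowledge of the
    proposed pair, modeling the simultaneous moves. Proposer here plays
    uniformly at random among legal pairs, purely for illustration.
    Returns the final edge set.
    """
    rng = random.Random(seed)
    all_pairs = {frozenset(p) for p in itertools.combinations(range(n), 2)}
    edges, forbidden = set(), set()
    adj = {v: set() for v in range(n)}   # adjacency of the growing graph
    while True:
        # Open pairs: non-edges whose endpoints have no common neighbour.
        open_pairs = {q for q in all_pairs - edges
                      if adj[min(q)].isdisjoint(adj[max(q)])}
        legal = sorted(open_pairs - forbidden, key=sorted)
        if not legal:                    # O_i is contained in F_i: game over
            return edges
        pair = rng.choice(legal)
        if decider_says_yes():           # simultaneous YES/NO answer
            edges.add(pair)
            x, y = tuple(pair)
            adj[x].add(y)
            adj[y].add(x)
        forbidden.add(pair)
```

With an all-YES Decider, this random Proposer traces out the triangle-free process, and the final graph is maximal triangle-free.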
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Decider's winning strategy</head><p>In this section, we show that Decider's random strategy wins w.h.p. when Proposer must build an independent set of order at least 1000√n log n.</p><p>Theorem 2. Decider has a randomized strategy such that w.h.p., no matter how Proposer plays, the final graph produced will contain no independent set of order at least 1000√n log n.</p><p>We will require the following structural result about the Erdős–Rényi random graph (see <ref type="bibr">[4,</ref><ref type="bibr">Lemma 18]</ref>). It shows that for suitable p ≈ n^{−1/2}, when edges are removed from G(n, p) until it is triangle-free, not many edges need to be removed from any given small subset. Lemma 3. Suppose t is sufficiently large, p = 20(log t)/t, n = 10^{−6}t²/(log t)², and G ∼ G(n, p) is an Erdős–Rényi random graph. Then, w.h.p. there does not exist a set S ⊂ V(G) of order t such that more than t²/10 pairs of vertices in S have a common neighbor outside S. In order to apply this lemma, we will need to understand the space of Proposer's strategies when Decider plays randomly. </p><p>Given a graph G, call a subgraph H ⊆ G reachable if every pair of vertices that is adjacent in G but not in H has a common neighbor in H. That is, every edge of G not in H is the third edge of a triangle with the other two edges in H.</p><p>Proof of Theorem 2. Suppose n, t, p satisfy the conditions of Lemma 3. Decider's strategy is simply to answer YES with probability p on every turn. Unless a pair {x, y} is closed, Proposer must choose it eventually, in which case it is added to the graph with probability p. Thus, we may as well pretend that Decider samples a random graph G ∼ G(n, p) at the beginning of the game and answers YES to a pair {x, y} if and only if x ∼ y in G. The final graph produced is thus some subgraph G′ ⊆ G of the random graph G, obtained by removing certain edges from triangles. 
In fact, it is easily seen that the final graph G′ must be a reachable subgraph of G.</p><p>We will show that w.h.p., every reachable subgraph of G ∼ G(n, p) has independence number less than t. Since t ≤ 1000√n log n, this completes the proof. For each S ⊂ V(G) of order t, define X(S) to be the event that at most t²/10 pairs of distinct vertices in S have a common neighbor outside S. By Lemma 3, w.h.p. all the events X(S) occur.</p><p>Let I(S) be the event that there exists a reachable subgraph H ⊆ G in which S is an independent set. Conditioning on X(S), at most t²/10 pairs of vertices in S have common neighbors outside S. Note that if G[S] contains an edge {u, v} with no common neighbor outside S, then S is not an independent set in any reachable subgraph H ⊆ G. This is because either {u, v} ∈ E(H) or else the two edges that form a triangle with {u, v} must both lie in E(H). In either case there is an edge in H[S].</p><p>Thus, since there are at least (t choose 2) − t²/10 such pairs in S chosen by Proposer, we have Pr(X(S) ∧ I(S)) ≤ (1 − p)^{(t choose 2) − t²/10}.</p><p>We now compute</p><p>Pr(∃S : I(S)) ≤ Pr(∃S : ¬X(S)) + Pr(∃S : X(S) ∧ I(S)).</p><p>The first summand goes to zero by Lemma 3. The second can be bounded by a union of events of low probability, as follows.</p><p>Pr(∃S : X(S) ∧ I(S)) ≤ e^{2t log t} · e^{−5t log t} = e^{−3t log t} → 0.</p><p>It follows that w.h.p., none of the events I(S) occur. That is, w.h.p. no reachable subgraph of G ∼ G(n, p) contains an independent set of order t, and Proposer cannot win.</p></div>
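The "pretend" coupling in this proof is easy to make concrete. The sketch below (illustrative, not from the paper) pre-samples G and prunes it in proposal order; the kept graph H is triangle-free, and every discarded edge of G completes a triangle with two kept edges, which is exactly the reachability property used above:

```python
import itertools
import random

def pruned_random_graph(n, p, proposal_order=None, seed=0):
    """Couple random-Decider play with a pre-sampled G from G(n, p).

    Scanning pairs in Proposer's order, an edge of G is kept only if the
    pair is still open, i.e. its endpoints have no common neighbour among
    the edges kept so far. Returns (G, H), where H is the final graph.
    """
    rng = random.Random(seed)
    pairs = [frozenset(q) for q in itertools.combinations(range(n), 2)]
    G = {q for q in pairs if p > rng.random()}   # pre-sampled YES answers
    adj = {v: set() for v in range(n)}
    H = set()
    for q in (proposal_order or pairs):
        x, y = tuple(q)
        if q in G and adj[x].isdisjoint(adj[y]):
            H.add(q)
            adj[x].add(y)
            adj[y].add(x)
    return G, H
```

A skipped edge of G had, at its turn, a common neighbour among the kept edges, and kept edges are never removed, so H is a reachable subgraph of G.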
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Proposer's winning strategy</head><p>In the previous section, we showed that Decider can win w.h.p. when s = Ω(√n log n). In this section, we show that Proposer can match this bound (up to the constant factor), by exhibiting a strategy that yields an independence number of Ω(√n log n) w.h.p.</p><p>Theorem 5. Proposer has a randomized strategy such that w.h.p., no matter how Decider plays, the final graph produced will contain an independent set of order at least (1/1000)√n log n.</p><p>We now formally describe Proposer's strategy. In the next subsection, we introduce an auxiliary game and its analysis, which will be crucial to the analysis of this strategy, and then we prove that this strategy indeed wins w.h.p. in Section 4.2. Proposer splits the vertex set into four parts U = {u_1, . . . , u_{n/6}}, V = {v_1, . . . , v_{n/6}}, A = {a_1, . . . , a_{n/3}}, and B = {b_1, . . . , b_{n/3}}, respectively (we can assume for simplicity that n is divisible by 6). Everything significant Proposer does will be between U and V, while A and B will only be used for "clean-up" to simplify the analysis.</p><p>Proposer divides the game into n/6 periods. At the beginning of period i, Proposer first compiles a list L_i of open pairs to propose in this period, and then orders L_i uniformly at random. During the period, Proposer proposes the pairs in L_i one at a time in this order. By the choice of L_i, regardless of how Decider plays during period i, none of the pairs in L_i will become closed, so Proposer will be able to propose all of them regardless of the chosen order or of Decider's choices.</p><p>Proposer's list L_i consists of all the pairs {u_i, v_j} for j &gt; i, together with all the pairs {v_i, v_j} with j &gt; i that are open at the beginning of period i. Additionally, Proposer ensures that each period has length n/3 by adding some number of pairs {a_ℓ, b_ℓ′} to L_i until |L_i| = n/3. The list L_i is designed so that all pairs in L_i are open at the start of period i and remain open until they are proposed. 
Moreover, since the induced subgraph on A ∪ B stays bipartite throughout this whole process, all of the "clean-up" pairs in L_i will remain open as well, and since we made A and B sufficiently large, Proposer will always have enough pairs to propose to ensure that |L_i| = n/3 for all periods. Finally, as stated above, once Proposer has compiled L_i, it is ordered uniformly at random, and Proposer proposes the pairs in L_i according to that order.</p><p>Once Proposer has done this for periods 1, 2, . . . , n/6, there will be many pairs that are still open and that Proposer has not yet proposed. Proposer will propose these in an arbitrary order; no matter what happens at this "endgame" stage, Proposer will have already guaranteed a sufficiently large independent set inside V, which will remain independent throughout the remainder of the game.</p><p>For the analysis, we will also want to group periods into epochs. To do this, we will pick a sequence of decreasing positive constants ε_1, ε_2, . . . , ε_m summing to 1/6, and declare the first epoch to consist of the first ε_1 n periods, the second epoch to consist of the subsequent ε_2 n periods, and so on. The number of epochs, m, will be a very slowly growing function of n. Let I_k ⊆ {1, . . . , n/6} denote the set of periods comprising epoch k, and let U_k = {u_i : i ∈ I_k} and V_k = {v_i : i ∈ I_k} denote the sets of vertices in U and V that are the roots of the pairs considered in epoch k.</p><p>In order to prove that this strategy works w.h.p., we will need a probabilistic tool which we call the bucket lemma, and which is stated and proved in the next subsection. It describes the behavior of a different game, which is designed to model one period of Ramsey, Paper, Scissors from Decider's perspective. 
The bucket lemma can be thought of as an adaptive concentration inequality akin to Azuma's inequality; it says that even if an adversary has certain weak control over a sequence of random variables, he can't make it stray too far from its mean w.h.p.</p></div>
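The list-building step of one period can be sketched in code. This is a hypothetical rendering, not from the paper: the vertex encoding ('u', i) and so on, and the helper names, are assumptions made for illustration.

```python
import random

def build_period_list(i, n, open_v_pairs, cleanup_pool, rng):
    """Sketch of Proposer's list L_i for period i (i ranging over 1..n/6).

    open_v_pairs: the pairs {v_i, v_j} that are currently open.
    cleanup_pool: an iterator over unused "clean-up" pairs between A and B.
    """
    # All pairs {u_i, v_j} for j greater than i.
    L = [frozenset({('u', i), ('v', j)}) for j in range(i + 1, n // 6 + 1)]
    # All open pairs {v_i, v_j} for j greater than i.
    L += [q for q in sorted(open_v_pairs, key=sorted)
          if ('v', i) in q and any(lbl == 'v' and j > i for lbl, j in q)]
    while n // 3 > len(L):          # pad with clean-up pairs to length n/3
        L.append(next(cleanup_pool))
    rng.shuffle(L)                  # propose in a uniformly random order
    return L
```

Each period thus has exactly n/3 proposals, mixing the U-to-V pairs, the open V-pairs, and the padding in one uniformly random order.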
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Coins and Buckets</head><p>We begin with a straightforward tight concentration lemma. To prove it, we will also need the following concentration lemma of Bohman <ref type="bibr">[2]</ref>, which is a generalization of Azuma's inequality; it says that if we have a martingale whose differences are bounded by different amounts from above and below, then we get concentration of the same order as a martingale with differences bounded by the geometric mean of the bounds.</p><p>Lemma 6. Suppose S_0, S_1, . . . , S_m is a martingale whose differences are bounded below by −c_1 and above by c_2, where 0 &lt; c_1 ≤ c_2/10 are constants. Let 0 &lt; λ &lt; mc_1. Then</p><p>Given integers a, b &gt; 0, define X_{a,b} to be the following random variable with mean zero: X_{a,b} takes the value 1/a with probability a/(a+b), and the value −1/b with probability b/(a+b).</p><p>Lemma 7. Suppose a, b, ν_0, ν are positive integers with a ≤ b, ν = a + b, and ν_0 ≤ ν, and X_1, . . . , X_ν are ν independent random variables identically distributed as X_{a,b}. Let S_m = Σ_{i≤m} X_i. Then, for all 0 &lt; t &lt; a/b,</p><p>Proof. Fix an m ≥ ν_0, and notice that S_0, S_1, . . . , S_m is a martingale. The martingale {S_j} is (1/a)-Lipschitz, so by Azuma's inequality, for all t &gt; 0,</p><p>We will use this bound when a ≥ b/10. In the case that a &lt; b/10, we use instead Lemma 6 with λ = tm/a, c_1 = 1/b, and c_2 = 1/a. The condition λ &lt; mc_1 holds because we assumed t &lt; a/b, so we have</p><p>Together with the fact that b ≥ ν/2, inequalities (1) and (2) show that</p><p>for all a ≤ b.</p><p>In particular, applying the union bound over all m ≥ ν_0 to (3), we have that</p><p>.</p><p>Because e^{−x} ≤ 1 − x/2 for x ∈ [0, 1], and because νt²/(20a) ≤ 1 in this range of t, we can bound 1 − e^{−νt²/(20a)} ≥ νt²/(40a) ≥ t²/20 for all 0 &lt; t &lt; a/b, where the last step uses ν ≥ 2a. 
Thus,</p><p>as desired.</p><p>Consider the following game, which we call Coins and Buckets, and which we will use to model a single period of Ramsey, Paper, Scissors in which Proposer plays randomly. The game begins with two buckets A, B of sizes a, b respectively, and a total of ν = a + b coins to divide among them. Before the game starts, Proposer picks a uniformly random a-element subset I of [ν] (but does not reveal the choice of the set to Decider). On step i of the game, a coin is placed into one of the two buckets: bucket A if i ∈ I, and bucket B otherwise. Before the coin is placed, Decider decides whether it is placed heads-up or tails-up, but Decider does not find out which bucket the coin enters until after the choice. Decider's goal is to make the distribution of heads in the buckets as uneven as possible. Namely, if we let h be the total number of coins placed heads-up, and h_A the number of coins in bucket A that are heads-up, Decider wishes to maximize the error parameter |h_A − ah/ν|, which we call his score.</p><p>We also pick a threshold ν_0, and consider the game a forfeit if Decider picks heads fewer than ν_0 times; that is, in this case Decider receives a score of zero. Call this game the Coins and Buckets game with parameters (a, ν, ν_0).</p><p>Under these conditions, we show that in Coins and Buckets, regardless of how Decider plays, w.h.p. the density of heads in the first bucket is not far from the overall density of heads.</p><p>Lemma 8. Suppose Decider plays a game of Coins and Buckets with parameters (a, ν, ν_0) where a ≤ ν/2. If h_A is the number of heads in the first bucket at the end of the game and h is the total number of heads, then for any 0 &lt; t &lt; a/ν,</p><p>Proof. Of course we may assume that Decider chooses heads at least ν_0 times. Let b = ν − a. 
Instead of playing under the assumption that Proposer picks a uniformly random a-element subset of [ν], we will simplify our analysis by considering the modification in which each coin is placed into the first bucket with probability a/ν independently; our first task is to show that this simplification is allowable, namely that a strong bound for this simplified model implies the desired bound in the original setting. To that end, let E be the event that exactly a of the coins fall in the first bucket in this new setting.</p><p>The factorial function satisfies √(2πν)(ν/e)^ν ≤ ν! ≤ e√ν(ν/e)^ν for all ν ≥ 1. Thus, we find that Pr[E] = (ν choose a)(a/ν)^a(b/ν)^b ≥ √(2πν)/(e²√(ab)) ≥ 1/(2√ν), where in the last step we used the AM-GM inequality in the form √(ab) ≤ ν/2.</p><p>The original formulation of Coins and Buckets can be obtained from the modified version by conditioning on the event E, and the probability of E is at least 1/(2√ν). Thus, it will suffice to show that in the modified setting,</p><p>because conditioning on E can multiply the failure probability by at most a factor of Pr[E]^{−1} (by Bayes' Theorem). We turn to proving (4) now.</p></div>
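The Coins and Buckets game is straightforward to simulate. The sketch below is illustrative, not from the paper, and the particular Decider strategies in the usage note are arbitrary choices:

```python
import random

def coins_and_buckets(a, nu, decider_heads, seed=0):
    """Play Coins and Buckets with |I| = a out of nu steps.

    decider_heads(i) is Decider's heads/tails call on step i, made
    without knowing whether the coin lands in bucket A (i in I) or B.
    Returns Decider's score |h_A - a*h/nu| (ignoring the nu_0 forfeit rule).
    """
    rng = random.Random(seed)
    I = set(rng.sample(range(nu), a))    # Proposer's hidden bucket-A steps
    h = h_A = 0
    for i in range(nu):
        if decider_heads(i):
            h += 1
            if i in I:
                h_A += 1
    return abs(h_A - a * h / nu)
```

Playing all heads scores exactly 0 (then h_A = a and h = ν), while a Decider who plays heads only on, say, the first half of the steps scores only the hypergeometric deviation, of order √a, in line with the lemma.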
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Let h_A^{(i)} and h_B^{(i)} be the number of heads in buckets A and B respectively after i turns of the game. Consider the sequence of random variables Z_i = h_A^{(i)}/a − h_B^{(i)}/b.</p><p>On each of Decider's turns, there are two possibilities. If Decider picks tails, then Z_i remains unchanged. On the other hand, if Decider picks heads, the coin goes into bucket A with probability a/(a + b), and bucket B otherwise. Therefore, in this case, Z_{i+1} − Z_i is distributed as X_{a,b}, the random variable defined for Lemma 7.</p><p>We are ready to apply Lemma 7. To see why, imagine that we "pre-process" some of the randomness in the game, as follows. Before the start of the game, we sample ν independent random variables X_1, . . . , X_ν identically distributed as X_{a,b}. Keep a count ℓ of the total number of heads so far. On each turn that Decider picks tails, we place the coin into a bucket at random as before. However, on the turn Decider picks heads for the ℓth time, we place the coin into bucket A if and only if X_ℓ = 1/a. From Decider's perspective, this is the same game, since he doesn't know that the randomness was "pre-processed". Under these assumptions, Z_i will be equal to exactly S_{h^{(i)}} = Σ_{j≤h^{(i)}} X_j, where h^{(i)} is the number of heads placed up through turn i. It follows that Z_ν = S_h, where h = h^{(ν)} is the total number of heads placed. But then we can apply Lemma 7 to see that for 0 &lt; t &lt; a/ν &lt; a/b,</p><p>We now finish by noting that h_A − ah/ν = (ab/ν)(h_A/a − h_B/b) = (ab/ν)Z_ν, since<ref type="foot">foot_0</ref> h/ν is a convex combination of h_A/a and h_B/b, and so (<ref type="formula">5</ref>) implies (<ref type="formula">4</ref>), as desired.</p></div>
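The convexity step above rests on two elementary identities, which can be checked mechanically over exact rationals. A small sketch (not from the paper):

```python
from fractions import Fraction

def convexity_identities(a, b, h_A, h_B):
    """Verify the identities used to pass from S_h to Decider's score:
    h/nu is the convex combination (a/nu)(h_A/a) + (b/nu)(h_B/b), and
    consequently h_A - a*h/nu = (a*b/nu) * (h_A/a - h_B/b)."""
    nu, h = a + b, h_A + h_B
    mix = Fraction(a, nu) * Fraction(h_A, a) + Fraction(b, nu) * Fraction(h_B, b)
    first = Fraction(h, nu) == mix
    second = (Fraction(h_A) - Fraction(a * h, nu)
              == Fraction(a * b, nu) * (Fraction(h_A, a) - Fraction(h_B, b)))
    return first and second
```

Both identities hold for any admissible counts, which is why a bound on |S_h| translates directly into a bound on the score |h_A − ah/ν|.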
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Proposer's strategy works</head><p>We are ready to show that the strategy described at the beginning of Section 4 indeed allows Proposer to produce an independent set of order (1/1000)√n log n w.h.p.</p><p>Recall the setup: G_i = (V, E_i) is the graph built on turn i. The vertices are labeled u_i, v_i, a_ℓ, b_ℓ, where 1 ≤ i ≤ n/6 and 1 ≤ ℓ ≤ n/3. Period i consists of a total of n/3 turns, during which Proposer proposes only pairs of the forms {u_i, v_j} and {v_i, v_j} with j &gt; i, and "clean-up" pairs {a_ℓ, b_ℓ′}. Proposer will propose all pairs of the first form {u_i, v_j}, but only the open pairs among {v_i, v_j}, and orders the choices completely at random. Let us fix some strategy for Decider; our goal is to show that no matter what this strategy is, Proposer will win w.h.p. We define p_i to be the fraction (out of n/3) of Decider's answers which are YES during period i; p_i will depend on Decider's strategy, and potentially also on the random permutation chosen by Proposer, or more generally on the state of the game throughout period i.</p><p>Recall also that we organized the periods into epochs I_k ⊆ {1, . . . , n/6} of decreasing sizes |I_k| = ε_k n. We define P_k to be the average of p_i over all i ∈ I_k. Finally, we define o_k to be the open density inside V_k at the end of epoch k − 1; formally, if i* is the maximum i ∈ I_{k−1}, then we define o_k to be the fraction of pairs within V_k that are still open at the end of period i*.</p><p>The main result that we need is the following theorem of Shearer <ref type="bibr">[14]</ref>, improving by a constant factor earlier work of Ajtai, Komlós, and Szemerédi <ref type="bibr">[1]</ref>. As usual, α(G) denotes the order of the largest independent set in G.</p><p>Theorem 9. 
If G is a triangle-free graph on n vertices and with average degree d, then α(G) ≥ (1 + o(1)) n ln d / d, where the o(1) term tends to 0 as d → ∞.</p><p>We first show, using Theorem 9, that if P_k is too small for a given epoch, then Proposer wins in this epoch immediately.</p><p>Lemma 10. Suppose that for some k, we have P_k &lt; 200/(o_k √n). Then with probability at least 1 − e^{−Ω(√n/(log n)³)}, Proposer will win the game; in fact, at the end of epoch k, V_k will contain w.h.p. an independent set of order (1/1000)√n log n, and since Proposer will have proposed or closed every pair in this independent set by the end of epoch k, it will remain independent until the end of the game.</p><p>Proof. First, suppose that at the end of epoch k, the average degree inside V_k is at most 3. By Turán's Theorem, a graph on ε_k n vertices with average degree d has independence number at least ε_k n/(d + 1), so V_k will contain an independent set of size at least ε_k n/4 ≥ (1/1000)√n log n for n sufficiently large. Thus, from now on, we may assume that the average degree inside V_k is more than 3 at the end of epoch k.</p><p>Next, suppose that o_k ≤ 1/n. This implies that at most |V_k|²/n edges can be built inside V_k, so the average degree in V_k is at most |V_k|/n = ε_k &lt; 3. This contradicts the above assumption, so we may assume that o_k &gt; 1/n.</p><p>Next, we show that if P_k &lt; ε_k/√n, then Proposer is guaranteed to win in epoch k.</p><p>Indeed, the total number of edges built in the entire epoch is at most P_k · ε_k n · (n/3) = P_k ε_k n²/3.</p><p>In particular, at most this many edges are built inside V_k. It follows that the average degree in V_k at the end of the game is between 3 and 2P_k n/3, so by Theorem 9, V_k will contain an independent set of order at least</p><p>where we use the fact that the function log d/d is monotonically decreasing for d &gt; 3. If P_k &lt; ε_k/√n, this is at least (1/1000)√n log n for n sufficiently large, and Proposer wins. 
Now, using this fact, we may assume that Decider answers YES at least an ε_k/√n fraction of the time throughout epoch k. We can now apply Lemma 8 to obtain a stronger bound on the total number of edges built in V_k. Namely, let A be the set of all pairs within V_k that are proposed in epoch k.</p><p>Also, a total of ε_k n²/3 turns occur in the epoch, and we know from the preceding argument that Decider answers YES at least an ε_k/√n fraction of the time, so we can take ν₀ = ε_k² n^{3/2}/3 as a lower bound on the number of YES answers Decider gives.</p><p>Thus, we can think of epoch k as an instance of the Coins and Buckets game with parameters (a, ν, ν₀) = (|A|, ε_k n²/3, ε_k² n^{3/2}/3).</p><p>Here we think of the set A of pairs as bucket A in the Coins and Buckets game, and the condition a < ν/2 is satisfied because ε_k < 1/6 and o_k ≤ 1. Each pair in A proposed corresponds to a coin placed in the bucket A, each pair proposed outside A corresponds to a coin in the bucket B, and Decider's answers of heads/tails correspond to YES/NO.</p><p>We now apply Lemma 8 with t = a/(2ν) = 3o_k ε_k/4 to this game. By our choice of ν₀, we know that h ≥ ν₀ always, so we find that the failure probability in Lemma 8 is at most e^{−Ω(√n/(log n)³)}; here we use our assumption that o_k ≥ 1/n, together with the fact that ε_k ≥ 1/log n for n sufficiently large. Therefore, we find that the number of edges actually built in V_k is bounded above by (3/4)P_k o_k ε_k² n² with probability at least 1 − e^{−Ω(√n/(log n)³)}. Thus the average degree in V_k at the end of the game will be at most (3/2)P_k o_k ε_k n. By Theorem 9 again, V_k must contain an independent set of order at least ε_k n · log d/(3d) with d ≤ (3/2)P_k o_k ε_k n, which is at least (1/1000)√(n log n), where we used the bounds P_k o_k ≤ 200/√n and ε_k ≥ 1/log n. Thus, in this regime of P_k, Proposer wins with probability at least 1 − e^{−Ω(√n/(log n)³)}. 
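The Coins and Buckets reduction above relies on Lemma 8, which is stated earlier in the paper and not reproduced in this section. As a rough illustration of the phenomenon it quantifies (when the proposal order is uniformly random, the YES density inside any fixed, sufficiently large bucket of pairs tracks the overall YES density), here is a toy Monte Carlo check. The function name and parameters are illustrative, and this simplified model, with all answers fixed in advance, is a stand-in for the lemma, not the lemma itself.

```python
import random

def bucket_density_gap(num_pairs, bucket_size, yes_answers, trials, rng):
    """Empirical check of the concentration behind the Coins and Buckets
    reduction: answers are fixed in advance, the proposal order is a
    uniformly random permutation, and we measure how far the YES density
    inside a fixed bucket A strays from the overall YES density.
    (A toy model of the phenomenon, not Lemma 8 itself.)"""
    overall = yes_answers / num_pairs
    worst_gap = 0.0
    for _ in range(trials):
        order = list(range(num_pairs))
        rng.shuffle(order)
        # Bucket A holds the pairs with index < bucket_size; the YES
        # answers land on whatever pairs occupy the first positions.
        yes_in_bucket = sum(
            1 for position, pair in enumerate(order)
            if pair < bucket_size and position < yes_answers
        )
        worst_gap = max(worst_gap, abs(yes_in_bucket / bucket_size - overall))
    return worst_gap

rng = random.Random(1)
gap = bucket_density_gap(num_pairs=2000, bucket_size=400,
                         yes_answers=1000, trials=50, rng=rng)
```

With these parameters the overall YES density is 1/2, and the observed bucket density stays within a small additive error of it; shrinking the bucket makes the fluctuations larger, matching the lower bounds of the form |A| ≥ (log n)²/P_{k−1} imposed on the buckets in the proofs.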
This lemma implies that for Decider to have a reasonable hope of winning, Decider must always ensure that P_k ≥ 200/(o_k √n). Intuitively, a large P_k implies that many edges will be built between U_k and V_{k+1}, which means that many pairs in V_{k+1} will be closed off, so o_{k+1} must be considerably smaller than o_k. Thus, to ensure P_{k+1} ≥ 200/(o_{k+1} √n), P_{k+1} must be considerably larger than P_k, and so on. This cannot be sustained for long because P_k must always be bounded by 1, so eventually Decider will run out of room and lose the game. The next two lemmas make this rigorous. We will need further notation; define o_{k,i} to be the fraction of pairs in V_k that are open at the end of period i.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><p>Lemma 11. Suppose that k ≥ 2, i ∈ I_{k−1}, and p_i, o_{k,i−1} satisfy p_i ≥ P_{k−1}/2 and</p><p>then with probability at least 1 − e^{−Ω((log n)²)},</p><p>Proof. We set up the conditions for Coins and Buckets to apply Lemma 8. Let L_i be the set of all pairs Proposer proposes during period i, consisting of all the pairs {u_i, v_j} with j > i, all the open pairs {v_i, v_j} with j > i, and some filler pairs {a_ℓ, b_ℓ′} to make a total of n/3 turns. Suppose A ⊆ L_i has order (log n)²/P_{k−1} ≤ |A| ≤ |L_i|/2. The total number of YES responses given in this period is p_i|L_i| ≥ P_{k−1}|L_i|/2, so we can define ν₀ = P_{k−1}|L_i|/2. Thus we can think of this period as a game of Coins and Buckets with parameters (a, ν, ν₀) = (|A|, |L_i|, P_{k−1}|L_i|/2). Then, by Lemma 8, the density of YES responses made by Decider to pairs in A will be close to the density of YES responses made by Decider to all pairs in L_i. Specifically, taking t = a/(2ν) = 3|A|/(2n) and writing y_A for the number of YES answers given to edges in A, we obtain inequality (7), with probability at least 1 − e^{−Ω((log n)²)}, by Lemma 8. In the inequality, we use the trivial bound |A| ≥ 1 and our assumption that |A| ≥ (log n)²/P_{k−1}. We will apply bound (7) a total of O(n) times, so by the union bound, all of them will hold with probability at least 1 − e^{−Ω((log n)²)}. Now, we count the number of open pairs in V_k before and after period i. Thinking of the set O_i of open pairs after period i as itself a graph on V, let O_{k,i} = O_i[V_k] be the induced subgraph of O_i consisting of open pairs in V_k after period i. 
The pairs of O_{k,i−1} that are closed off after period i are exactly those pairs {v_j, v_{j′}} for which Decider answers YES to both {u_i, v_j} and {u_i, v_{j′}}, or else to both {v_i, v_j} and {v_i, v_{j′}}. In particular, to lower-bound the number of pairs that are closed off in period i, it suffices to lower-bound the number of j, j′ for which Decider answers YES to both {u_i, v_j} and {u_i, v_{j′}}.</p><p>We write N_{O_{k,i−1}}(v_j) for the neighborhood of v_j in O_{k,i−1}, and d_{k,i−1}(v_j) for the order of this neighborhood. We will apply inequality (7) to the set A(v_j) = {u_i} × N_{O_{k,i−1}}(v_j), consisting of all pairs from u_i to N_{O_{k,i−1}}(v_j). We can only apply it to those v_j with d_{k,i−1}(v_j) ≥ (log n)²/P_{k−1}; note that the inequality |A(v_j)| ≤ |L_i|/2 holds for all v_j, since at most half the pairs in L_i are between u_i and V_k, and in particular at most half are in A(v_j). Then for those v_j with d_{k,i−1}(v_j) ≥ (log n)²/P_{k−1}, inequality (7) tells us that with probability at least 1 − e^{−Ω((log n)²)}, the number of neighbors v_{j′} ∈ N_{O_{k,i−1}}(v_j) for which Decider answers YES to {u_i, v_{j′}} is correspondingly large. We will write y_{N_{O_{k,i−1}}(v_j)} for this number of YES answers, which is a shorthand for y_{{u_i} × N_{O_{k,i−1}}(v_j)}.</p><p>We also divide the vertices v_j ∈ V_k into dyadic intervals based on degree. That is, let D_ℓ be the set of all vertices v_j ∈ V_k with d_{k,i−1}(v_j) lying in [2^{ℓ−1}, 2^ℓ), where 1 ≤ ℓ ≤ log₂ n. Applying inequality (7) again, to the set {u_i} × D_ℓ, we find that y_{D_ℓ} is close to p_i|D_ℓ| with probability at least 1 − e^{−Ω((log n)²)}. Here we again write y_{D_ℓ} as a shorthand for y_{{u_i} × D_ℓ}, the number of YES answers made to pairs of the form {u_i, v_j} where v_j ∈ D_ℓ.</p><p>Throughout, we applied inequality (7) a total of O(n) times, so everything still holds with probability 1 − e^{−Ω((log n)²)}. 
Putting all this together, we can lower bound the number of open pairs that are closed off in period i, by summing over ℓ. For each ℓ ≥ 1 + log((log n)²/P_{k−1}), every v_j ∈ D_ℓ has degree at least 2^{ℓ−1} ≥ (log n)²/P_{k−1}, so we can apply our above lower bound for y_{N_{O_{k,i−1}}(v_j)}. Suppose additionally that ℓ is such that |D_ℓ| ≥ (log n)²/P_{k−1}, so that we also have a bound for y_{D_ℓ}. Then we find that for such an ℓ, the number of pairs {v_j, v_{j′}} closed off with v_j ∈ D_ℓ is at least</p><p>We now sum this up over all ℓ as above, namely all ℓ ≥ 1 + log((log n)²/P_{k−1}) with |D_ℓ| ≥ (log n)²/P_{k−1}; call such values of ℓ good. Doing so gives us a lower bound on the total number of open pairs in V_k closed off during period i; note that when we sum up, we might double-count pairs {v_j, v_{j′}} if v_j and v_{j′} are in different parts of the dyadic partition {D_ℓ}. Thus, we need to divide by 2 when summing, and find that the total number of open pairs in V_k that are closed during period i is at least</p><p>So it suffices to bound Σ_{v_j ∈ D_ℓ} d_{k,i−1}(v_j) for all ℓ that are not good. If ℓ ≤ log((log n)²/P_{k−1}), then</p><p>On the other hand, if ℓ is not good because |D_ℓ| < (log n)²/P_{k−1}, then we find that</p><p>Since there are at most log n values of ℓ, and in particular at most log n values of ℓ that are not good, we find that</p><p>We can now plug this back into <ref type="bibr">(8)</ref>. Doing so, we find that the number of open pairs in V_k closed off during period i is at least</p><p>Putting this together, we find that</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><p>Dividing out by |V_k|² gives the desired result for o_{k,i}.</p><p>Now we are ready to accumulate the density decrements from this lemma, to show that o_k is small in terms of P_{k−1}, as before. Recall that o_k is the open density in V_k just after epoch k − 1.</p><p>Lemma 12. With probability at least 1 − e^{−Ω((log n)²)}, for all k ≥ 2, we have o_k ≤ e^{−ε_{k−1}P_{k−1}²n/64}.</p><p>Proof. We will apply Lemma 11 using the fact that P_{k−1} is the average value of p_i across all i ∈ I_{k−1}. If the claimed bound already holds for o_{k−1}, then we are done, since the open density can never increase. Also, we will ignore every period i where p_i < P_{k−1}/2, and let I*_{k−1} be the indices of the remaining periods.</p><p>The conditions of Lemma 11 are satisfied for each i ∈ I*_{k−1}, so we have for these i,</p><p>Iterating over all i ∈ I*_{k−1} and noting that the open density is initially at most 1, it follows that</p><p>There are ε_{k−1}n periods in epoch k − 1, so the total sum of p_i over i ∈ I_{k−1} is P_{k−1} · ε_{k−1}n, while the sum of those p_i for which p_i < P_{k−1}/2 is at most half this amount. Thus, together with the Cauchy–Schwarz inequality,</p><p>Plugging into <ref type="bibr">(9)</ref>, this implies that o_k ≤ e^{−ε_{k−1}P_{k−1}²n/64}, as desired.</p><p>From Lemmas 10 and 12, we can complete the proof of Theorem 5.</p><p>Proof of Theorem 5. We pick m = log*(n) + 1, where log*(n) is the binary iterated logarithm function. We then pick ε_k = 1/(6 · 2^k) for 1 ≤ k ≤ m − 1 and ε_m = ε_{m−1}, so that Σ_k ε_k = 1/6. Moreover, since ε_k ≥ ε_m ≥ 2^{−log*(n)}/6, we get that ε_k ≥ 1/log n for all n sufficiently large, as we required.</p><p>For each k, we may assume by Lemma 10 that P_k ≥ 200/(o_k √n).</p><p>Suppose that for some k ≤ m − 1, the first term in the maximum in <ref type="bibr">(10)</ref> is larger. Then o_k ≤ 2^k(log n)³/(10√n). 
Applying (10) again, o_{k+1} ≤ max{2^{2k+1}(log n)⁶/(100n), exp(−10000n/(2^{3k+1}(log n)⁶))} ≤ 200/√n for n sufficiently large, since 2^{3k} = O(log n) for k = O(log* n). It then follows by Lemma 10 that Proposer can win in epoch k + 1 w.h.p.</p><p>Thus, we may assume that the second expression on the right-hand side of (<ref type="formula">10</ref>) is larger for every k ≤ m − 1. Writing x_k = −log o_k, the bound becomes x_k ≥ (100/2^k) e^{2x_{k−1}}, starting from x_1 = 0. It is easy to prove by induction that x_k ≥ t_k(2) for all k ≥ 2, where t_k(2) is a tower of 2s of height k. Thus, x_{m−1} ≥ t_{log* n}(2) ≥ n. But then, o_{m−1} = e^{−x_{m−1}} ≤ e^{−n}, which implies that o_{m−1} = 0, since it must be a nonnegative rational with denominator less than n². Proposer wins by epoch m − 1 in this case.</p><p>We have shown that Proposer guarantees w.h.p. the existence of an independent set of size (1/1000)√(n log n), as desired.</p></div>
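The tower-growth step at the end of the proof can be sanity-checked numerically. The sketch below is an illustration only: under the reading x_k ≥ (100/2^k)e^{2x_{k−1}} of the recursion, it iterates for the few steps that fit in floating point, confirms x_k ≥ t_k(2) there, and checks that a tower of height log*(n) exceeds n.

```python
import math

def tower(k):
    """t_k(2): a tower of 2s of height k (t_1 = 2, t_2 = 4, t_3 = 16, ...)."""
    t = 1
    for _ in range(k):
        t = 2 ** t
    return t

def log_star(n):
    """Binary iterated logarithm: how many times log2 must be applied
    before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# Iterate x_k = (100 / 2**k) * exp(2 * x_{k-1}) from x_1 = 0; already at
# k = 3 the value dwarfs the tower t_3(2) = 16, and it overflows floats
# shortly afterwards, which is exactly the point of the tower-growth argument.
x = 0.0
for k in range(2, 4):
    x = (100 / 2 ** k) * math.exp(2 * x)
    assert x >= tower(k)

# A tower of height log*(n) exceeds n, which is what forces o_{m-1} = 0.
assert tower(log_star(10 ** 6)) >= 10 ** 6
```

The constants here are only those suggested by the reconstruction above; the qualitative point, that the recursion dominates a tower function and therefore collapses o_k to zero within log*(n) + O(1) epochs, does not depend on them.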
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Concluding Remarks</head><p>A natural open problem is to close the constant-factor gap in Theorem 1. If we define RPS(n) to be the largest s for which Proposer can win with probability at least 1/2 regardless of how Decider chooses to play, then Theorem 1 implies that RPS(n) = Θ(√(n log n)). We have made no attempt to optimize the constants 10⁻³ and 10³ that our proof provides, but even if we did, it is unlikely that they would match, meaning that we still do not understand the precise asymptotic behavior of RPS(n). In the language of thresholds, Theorem 1 asserts that RPS(n) exhibits a coarse threshold; we conjecture that there is a sharp threshold.</p><p>Conjecture 13. There is some constant C > 0 so that RPS(n) = (C + o(1))√(n log n). Further, for every ε > 0, Proposer can win w.h.p. if s < (C − ε)√(n log n), while Decider can win w.h.p. if s > (C + ε)√(n log n).</p><p>Moreover, we suspect that Decider's strategy of playing randomly is optimal. Finally, there is a natural extension of Ramsey, Paper, Scissors to the case where we forbid a subgraph other than the triangle. Specifically, for any fixed graph H, we define the H-RPS game to be exactly as before, except that Proposer can never propose a pair that, if added, would form a copy of H. As before, Proposer wins if the independence number is at least s when Proposer has no legal moves remaining. From this, we can define the quantity RPS(H; n) to be the maximum s for which Proposer can win with probability at least 1/2. Thus, the above discussion concerns the special case RPS(K_3; n), and it would be natural to search for analogues of Theorem 1 for RPS(H; n), where H is some graph other than K_3. 
A natural question is whether Decider's random strategy is still close to optimal.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>Shearer's bound is of the form α(G) ≥ n·f(d) with f(d) = (1 + o(1)) log d/d, and one can check that f(d) ≥ log d/(3d) for all d.</p></note>
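For readers who want to experiment with the game (or its H-RPS variants), here is a self-contained toy simulation of the triangle version, with a naive Proposer who proposes a uniformly random legal pair each turn and a Decider who flips a fair coin. These baseline strategies are illustrative stand-ins, not the strategies analyzed in this paper, and the greedy routine only gives a lower bound on the independence number.

```python
import itertools
import random

def rps_game(n, yes_prob, rng):
    """Play one round of Ramsey, Paper, Scissors on n vertices with a
    random Proposer and a coin-flipping Decider (toy baseline strategies,
    not the ones from the paper)."""
    adj = {v: set() for v in range(n)}
    unproposed = set(itertools.combinations(range(n), 2))
    while True:
        # A pair is legal if proposing it cannot complete a triangle,
        # i.e. its endpoints have no common neighbor.
        legal = [(u, v) for (u, v) in unproposed if not (adj[u] & adj[v])]
        if not legal:
            break  # Proposer has no legal moves remaining: the game ends
        u, v = rng.choice(legal)
        unproposed.discard((u, v))
        if rng.random() < yes_prob:  # Decider answers without seeing the pair
            adj[u].add(v)
            adj[v].add(u)
    return adj

def greedy_independent_set(adj):
    """Greedy (minimum-degree-first) lower bound on the independence number."""
    chosen, blocked = [], set()
    for v in sorted(adj, key=lambda u: len(adj[u])):
        if v not in blocked:
            chosen.append(v)
            blocked.add(v)
            blocked |= adj[v]
    return chosen

rng = random.Random(0)
adj = rps_game(30, 0.5, rng)
ind = greedy_independent_set(adj)
# The final graph is triangle-free by construction; len(ind) estimates
# its independence number from below.
```

For small n one can replace the greedy routine with an exact maximum independent set search to compare against the √(n log n) scale, though exact computation is exponential in general.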
		</body>
		</text>
</TEI>
