<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Online Matching with General Arrivals</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>2019 4th Quarter (CY)</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10121530</idno>
					<idno type="doi"></idno>
					<title level='j'>IEEE Symposium on Foundations of Computer Science</title>
<idno></idno>
<biblScope unit="volume"></biblScope>
<biblScope unit="issue"></biblScope>					

					<author>Buddhima Gamlath</author><author>Michael Kapralov</author><author>Andreas Maggiori</author><author>Ola Svensson</author><author>David Wajc</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[The online matching problem was introduced by Karp, Vazirani and Vazirani nearly three decades ago. In that seminal work, they studied this problem in bipartite graphs with vertices arriving only on one side, and presented optimal deterministic and randomized algorithms for this setting. In comparison, more general arrival models, such as edge arrivals and general vertex arrivals, have proven more challenging, and positive results are known only for various relaxations of the problem. In particular, even the basic question of whether randomization allows one to beat the trivially-optimal deterministic competitive ratio of 1/2 for either of these models was open. In this paper, we resolve this question for both these natural arrival models, and show the following. For edge arrivals, randomization does not help: no randomized algorithm is better than 1/2 competitive. For general vertex arrivals, randomization helps: there exists a randomized (1/2 + Ω(1))-competitive online matching algorithm.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Matching theory has played a prominent role in the area of combinatorial optimization, with many applications <ref type="bibr">[21,</ref><ref type="bibr">23]</ref>. Moreover, many fundamental techniques and concepts in combinatorial optimization can trace their origins to its study, including the primal-dual framework <ref type="bibr">[19]</ref>, proofs of polytopes' integrality beyond total unimodularity <ref type="bibr">[8]</ref>, and even the equation of efficiency with polytime computability <ref type="bibr">[9]</ref>.</p><p>Given the prominence of matching theory in combinatorial optimization, it comes as little surprise that the maximum matching problem was one of the first problems studied from the point of view of online algorithms and competitive analysis. In 1990, Karp, Vazirani, and Vazirani <ref type="bibr">[18]</ref> introduced the online matching problem, and studied it under one-sided bipartite arrivals. For such arrivals, Karp et al. noted that the trivial 1/2-competitive greedy algorithm (which matches any arriving vertex to an arbitrary unmatched neighbor, if one exists) is optimal among deterministic algorithms for this problem. More interestingly, they provided an elegant randomized online algorithm for this problem, called ranking, which achieves an optimal (1 − 1/e) competitive ratio. (This bound has been re-proven many times over the years <ref type="bibr">[2,</ref><ref type="bibr">6,</ref><ref type="bibr">7,</ref><ref type="bibr">11,</ref><ref type="bibr">13]</ref>.) Online matching and many extensions of this problem under one-sided bipartite vertex arrivals have been widely studied over the years, both under adversarial and stochastic arrival models.
See recent work <ref type="bibr">[5,</ref><ref type="bibr">15,</ref><ref type="bibr">16,</ref><ref type="bibr">17]</ref> and the excellent survey of Mehta <ref type="bibr">[22]</ref> for further references on this rich literature.</p><p>Despite our increasingly better understanding of one-sided online bipartite matching and its extensions, the problem of online matching under more general arrival models, including edge arrivals and general vertex arrivals, has remained staunchly defiant, resisting attacks. In particular, the basic questions of whether the trivial 1/2 competitive ratio is optimal for the adversarial edge-arrival and general vertex-arrival models have remained tantalizing open questions in the online algorithms literature. In this paper, we answer both of these questions.</p></div>
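The trivial baseline discussed above is easy to make concrete. The following sketch (our own illustration, not code from the paper) implements the greedy algorithm in the edge-arrival setting: accept an arriving edge iff both endpoints are still free. The output is a maximal matching, hence at least half the size of a maximum matching.

```python
def greedy_online_matching(edge_stream):
    """Greedily match each arriving edge if both endpoints are free.

    The resulting matching is maximal, hence at least half the size of
    a maximum matching: the trivial 1/2-competitive baseline.
    """
    matched = set()   # vertices already matched
    matching = []     # edges accepted into the matching
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```

Running it on the path a-b-c-d with edge {b, c} arriving first yields a matching of size 1 while OPT has size 2, showing that the ratio 1/2 is tight for greedy.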
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Prior Work and Our Results</head><p>Here we outline the most relevant prior work, as well as our contributions. Throughout, we say an algorithm (either randomized or fractional) has competitive ratio α, or equivalently is α-competitive, if the ratio of the algorithm's value (e.g., expected matching size, or overall value, ∑_e x_e) to OPT is at least α for all inputs and arrival orders. As is standard in the online algorithms literature on maximization problems, we use upper bounds (on α) to refer to hardness results, and lower bounds to positive results.</p><p>Edge Arrivals. Arguably the most natural, and the least restricted, arrival model for online matching is the edge arrival model. In this model, edges are revealed one by one, and an online matching algorithm must decide immediately and irrevocably whether to match the edge on arrival, or whether to leave both endpoints free to be possibly matched later.</p><p>On the hardness front, the problem is known to be strictly harder than the one-sided vertex arrival model of Karp et al. <ref type="bibr">[18]</ref>, which admits a competitive ratio of 1 − 1/e ≈ 0.632. In particular, Epstein et al. <ref type="bibr">[10]</ref> gave an upper bound of 1/(1 + ln 2) ≈ 0.591 for this problem, recently improved by Huang et al. <ref type="bibr">[16]</ref> to 2 − √2 ≈ 0.585. (Both bounds apply even to online algorithms with preemption; i.e., allowing edges to be removed from the matching in favor of a newly-arrived edge.) On the positive side, as pointed out by Buchbinder et al.
<ref type="bibr">[3]</ref>, the edge arrival model has proven challenging, and results beating the 1/2 competitive ratio were only achieved under various relaxations, including: random order edge arrival <ref type="bibr">[14]</ref>, bounded number of arrival batches <ref type="bibr">[20]</ref>, on trees, either with or without preemption <ref type="bibr">[3,</ref><ref type="bibr">24]</ref>, and for bounded-degree graphs <ref type="bibr">[3]</ref>. The above papers all asked whether the trivial 1/2 competitive ratio is optimal for online matching under adversarial edge arrivals; our Theorem 1.1 answers this question in the affirmative, showing that no fractional (and hence no randomized) online matching algorithm is (1/2 + Ω(1))-competitive in this model.</p><p>General Vertex Arrivals. Clearly, the general vertex arrival model, in which each arriving vertex is revealed together with its edges to previously-arrived neighbors, is no harder than the online edge arrival model. But is it easier? The answer is "yes" for fractional algorithms, as shown by combining our Theorem 1.1 with the 0.526-competitive fractional online matching algorithm under general vertex arrivals of Wang and Wong <ref type="bibr">[25]</ref>. For integral online matching, however, the problem has proven challenging, and the only positive results for this problem, too, are for various relaxations, such as restriction to trees, either with or without preemption <ref type="bibr">[3,</ref><ref type="bibr">4,</ref><ref type="bibr">24]</ref>, for bounded-degree graphs <ref type="bibr">[3]</ref>, or (recently) allowing vertices to be matched during some known time interval <ref type="bibr">[15,</ref><ref type="bibr">16]</ref>.</p><p>We elaborate on the last relaxation above. In the model recently studied by Huang et al. <ref type="bibr">[15,</ref><ref type="bibr">16]</ref>, vertices have both arrival and departure times, and edges can be matched whenever both their endpoints are present. (One-sided vertex arrivals is a special case of this model with all online vertices departing immediately after arrival and offline vertices departing at ∞.) We note that any α-competitive online matching algorithm under general vertex arrivals is α-competitive in the less restrictive model of Huang et al.
As observed by Huang et al., for their model an optimal approach might as well be greedy; i.e., an unmatched vertex v should always be matched at its departure time if possible. In particular, Huang et al. <ref type="bibr">[15,</ref><ref type="bibr">16]</ref> showed that the ranking algorithm of Karp et al. achieves a competitive ratio of ≈ 0.567. For general vertex arrivals, however, ranking (and indeed any maximal matching algorithm) is no better than 1/2-competitive, as is readily shown by a path on three edges with the internal vertices arriving first. Consequently, new ideas and algorithms are needed.</p><p>The natural open question for general vertex arrivals is whether a competitive ratio of (1/2 + Ω(1)) is achievable by an integral randomized algorithm, without any assumptions (see e.g., <ref type="bibr">[25]</ref>). In this work, we answer this question in the affirmative: Theorem 1.2. There exists a (1/2 + Ω(1))-competitive randomized online matching algorithm for general adversarial vertex arrivals.</p></div>
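The path-on-three-edges lower bound mentioned above can be checked mechanically. The sketch below (encoding and names are our own) simulates any algorithm that maintains a maximal matching under general vertex arrivals: each arriving vertex is matched to an arbitrary free, previously-arrived neighbor whenever one exists. On the path a-b-c-d with the internal vertices b, c arriving first, such an algorithm is forced to match {b, c} and ends with one edge, while OPT = {{a, b}, {c, d}} has two.

```python
def maximal_matching_vertex_arrivals(arrivals):
    """Simulate a maximal-matching algorithm under general vertex arrivals.

    `arrivals` is a list of (vertex, neighbors-among-earlier-vertices);
    each arriving vertex is matched to an arbitrary free neighbor, if any.
    """
    matched = {}
    for v, neighbors in arrivals:
        for u in neighbors:
            if u not in matched and v not in matched:
                matched[u] = v
                matched[v] = u
                break
    return {frozenset((u, v)) for u, v in matched.items()}

# Path a-b-c-d, internal vertices b and c arriving first:
p3_arrivals = [("b", []), ("c", ["b"]), ("a", ["b"]), ("d", ["c"])]
```

Here `maximal_matching_vertex_arrivals(p3_arrivals)` returns only {b, c}: half of OPT, illustrating why any maximal matching algorithm is stuck at 1/2 in this model.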
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Our Techniques</head><p>Edge Arrivals. All prior upper bounds in the online literature <ref type="bibr">[3,</ref><ref type="bibr">10,</ref><ref type="bibr">11,</ref><ref type="bibr">16,</ref><ref type="bibr">18]</ref> can be rephrased as upper bounds for fractional algorithms; i.e., algorithms which immediately and irrevocably assign each edge e a value x_e on arrival, so that x is contained in the fractional matching polytope, P = {x ≥ 0 | ∑_{e∋v} x_e ≤ 1 ∀v ∈ V}. With the exception of <ref type="bibr">[3]</ref>, the core difficulty of these hard instances is uncertainty about the "identity" of vertices (in particular, which vertices will neighbor which vertices in the following arrivals). Our hardness instances instead rely on uncertainty about the "time horizon". In particular, the underlying graph, vertex identifiers, and even arrival order are known to the algorithm, but the number of edges of the graph to be revealed (to arrive) is uncertain. Consequently, an α-competitive algorithm must accrue high enough value up to each arrival time to guarantee a high competitive ratio at all points in time. As we shall show, for competitive ratio 1/2 + Ω(1), this goal is at odds with the fractional matching constraints, and so such a competitive ratio is impossible. In particular, we provide a family of hard instances and formulate their prefix-competitiveness and matching constraints as linear constraints to obtain a linear program whose objective value bounds the optimal competitive ratio. Solving the obtained LP's dual, we obtain by weak duality the claimed upper bound on the optimal competitive ratio.</p><p>General Vertex Arrivals. Our high-level approach here will be to round online a fractional online matching algorithm's output, specifically that of Wang and Wong <ref type="bibr">[25]</ref>. While this approach sounds simple, there are several obstacles to overcome.
First, the fractional matching polytope is not integral in general graphs, where a fractional matching may have value ∑_e x_e some 3/2 times larger than the optimal matching size. (For example, in a triangle graph with value x_e = 1/2 for each edge e.) Therefore, any general rounding scheme must lose a factor of 3/2 on the competitive ratio compared to the fractional algorithm's value, and so to beat a competitive ratio of 1/2 would require an online fractional matching with competitive ratio > 3/4 > 1 − 1/e, which is impossible. To make matters worse, even in bipartite graphs, for which the fractional matching polytope is integral and offline lossless rounding is possible <ref type="bibr">[1,</ref><ref type="bibr">12]</ref>, online lossless rounding of fractional matchings is impossible, even under one-sided vertex arrivals <ref type="bibr">[5]</ref>.</p><p>Despite these challenges, we show that a slightly better than 1/2-competitive fractional matching computed by the algorithm of <ref type="bibr">[25]</ref> can be rounded online without incurring too high a loss, yielding a (1/2 + Ω(1))-competitive randomized algorithm for online matching under general vertex arrivals.</p><p>To outline our approach, we first consider a simple method to round matchings online. When vertex v arrives, we pick an edge {u, v} with probability z_u = x_uv / Pr[u free when v arrives], and add it to our matching if u is free.</p><p>If ∑_u z_u ≤ 1, this allows us to pick at most one edge per vertex and have each edge e = {u, v} be in our matching with the right marginal probability, x_e, resulting in a lossless rounding.
Unfortunately, we know of no better-than-1/2-competitive fractional algorithm for which this rounding guarantees ∑_u z_u ≤ 1.</p><p>However, we observe that, for the correct set of parameters, the fractional matching algorithm of Wang and Wong <ref type="bibr">[25]</ref> makes ∑_u z_u close to one, while still ensuring a better-than-1/2-competitive fractional solution. Namely, as we elaborate later in Section 3.3, we set the parameters of their algorithm so that ∑_u z_u ≤ 1 + O(ε), while retaining a competitive ratio of 1/2 + O(ε). Now consider the same rounding algorithm with normalized probabilities: i.e., on v's arrival, sample a neighbor u with probability z′_u = z_u / max{1, ∑_w z_w} and match if u is free. As the sum of the z_u's is only slightly above one in the worst case, this approach does not drastically reduce the competitive ratio. But the normalization factor is still too significant compared to the competitive ratio of the fractional solution, driving the competitive ratio of the rounding algorithm slightly below 1/2.</p><p>To account for this minor yet significant loss, we therefore augment the simple algorithm by allowing it, with small probability (e.g., say √ε), to sample a second neighbor u_2 for each arriving vertex v, again with probabilities proportional to z′_{u_2}: If the first sampled choice, u_1, is free, we match v to u_1. Otherwise, if the second choice, u_2, is free, we match v to u_2. What is the marginal probability that such an approach matches an incoming vertex v to a given neighbor u? Letting F_u denote the event that u is free when v arrives, this probability is precisely</p><p>Pr[F_u] · (z′_u + √ε · z′_u · ∑_w z′_w · Pr[¬F_w | F_u]). (1)</p><p>Here the first term in the parentheses corresponds to the probability that v matches to u via the first choice, and the second term corresponds to the same happening via the second choice (which is only taken when the first choice fails). Ideally, we would like (1) to be at least x_uv for all edges, which would imply a lossless rounding.
However, as mentioned earlier, this is difficult and in general impossible to do, even in much more restricted settings, including one-sided bipartite vertex arrivals. We therefore settle for showing that (1) is at least x_uv = Pr[F_u] · z_u for most edges (weighted by x_uv). Even this goal, however, is challenging and requires a nontrivial understanding of the correlation structure of the random events F_u. To see this, note that, for example, if the F_w events are perfectly positively correlated, i.e., Pr[F_w | F_u] = 1, then the possibility of picking e as a second edge does not increase this edge's probability of being matched at all compared to if we only picked a single edge per vertex. This results in e being matched with probability Pr[F_u] · z′_u = x_uv / ∑_w z_w, which does not lead to any gain over the 1/2 competitive ratio of greedy. Such problems are easily shown not to arise if all the F_u variables are independent or negatively correlated. Unfortunately, positive correlation does arise from this process, and so we need to control these positive correlations.</p><p>The core of our analysis is therefore dedicated to showing that even though positive correlations do arise, they are by and large rather weak. Our main technical contribution consists of developing techniques for bounding such positive correlations. The idea behind the analysis is to consider the primary and secondary choices of vertices as defining a graph, and to show that, after a natural pruning operation that reflects the structure of dependencies, most vertices are most often part of a very small connected component in this graph. The fact that connected components are typically very small is exactly what makes positive correlations weak and results in the required lower bound on (1) for most edges (in terms of x-value), which in turn yields our 1/2 + Ω(1) competitive ratio.</p></div>
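The role of the correlation terms in expression (1) can be made concrete with a small helper (entirely our own illustration; the dictionary interface and names are assumptions). It evaluates the marginal match probability Pr[F_u] · (z′_u + √ε · z′_u · ∑_w z′_w · Pr[¬F_w | F_u]): under perfect positive correlation the conditional probabilities Pr[¬F_w | F_u] vanish, and with them the second-choice bonus, exactly as described above.

```python
import math

def two_choice_marginal(p_u_free, z_norm, u, p_matched_given_u_free, eps):
    """Evaluate the marginal probability (expression (1)) that an arriving
    vertex matches neighbor u: a first neighbor is drawn from z_norm, and
    with probability sqrt(eps) a second one, used only when the first is
    already matched.  p_matched_given_u_free[w] stands for Pr[not F_w | F_u].
    """
    bonus = math.sqrt(eps) * z_norm[u] * sum(
        z_norm[w] * p_matched_given_u_free[w] for w in z_norm if w != u
    )
    return p_u_free * (z_norm[u] + bonus)
```

With z′ = {u: 0.5, w: 0.5}, Pr[F_u] = 0.6 and ε = 0.04, perfect positive correlation (Pr[¬F_w | F_u] = 0) gives marginal 0.3 = Pr[F_u] · z′_u, i.e., no gain from the second sample, while Pr[¬F_w | F_u] = 1 raises it to 0.33.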
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Edge Arrivals</head><p>In this section we prove the asymptotic optimality of the greedy algorithm for online matching under adversarial edge arrivals. As discussed briefly in Section 1, our main idea will be to provide a "prefix hardness" instance, where an underlying input and the arrival order are known to the online matching algorithm, but the prefix of the input to arrive (or "termination time") is not. Consequently, the algorithm must accrue high enough value up to each arrival time to guarantee a high competitive ratio at all points in time. As we show, the fractional matching constraints rule out a competitive ratio of 1/2 + Ω(1) even in this model where the underlying graph is known.</p><p>Theorem 2.1. There exists an infinite family of bipartite graphs with maximum degree n and edge arrival order for which any online matching algorithm is at best (1/2 + 1/(2n+2))-competitive.</p><p>Proof. We will provide a family of graphs for which no fractional online matching algorithm has a better competitive ratio. Since any randomized algorithm induces a fractional matching algorithm, this immediately implies our claim. The n-th graph of the family, G_n = (U, V, E), is a bipartite graph with |U| = |V| = n vertices on either side. We denote by u_i ∈ U and v_i ∈ V the i-th node on the left and right side of G_n, respectively. Edges are revealed in n discrete rounds.</p><p>In round i = 1, 2, . . . , n, the edges of a perfect matching between the first i left and right vertices arrive in some order; i.e., a matching of u_1, u_2, . . . , u_i and v_1, v_2, . . . , v_i is revealed. Specifically, the edges (u_j, v_{i−j+1}) for all j ≤ i arrive. (See Figure <ref type="figure">1</ref> for an example.)
Intuitively, the difficulty for an algorithm attempting to assign much value to edges of OPT is that the (unique) maximum matching OPT changes every round, and no edge ever re-enters OPT.</p><p>Figure <ref type="figure">1</ref>: G_5, together with arrival order. Edges of the current (prior) round are solid (dashed).</p><p>Consider some α-competitive fractional algorithm A. We call the edge of a vertex w in the (unique) maximum matching of the subgraph of G_n following round i the i-th edge of w. For j ≤ i, denote by x_{i,j} the value A assigns to the i-th edge of vertex u_j (and of v_{i−j+1}); i.e., to (u_j, v_{i−j+1}). By feasibility of the fractional matching output by A, we immediately have that x_{i,j} ≥ 0 for all i, j, as well as the following matching constraints for u_j and v_j. (For the latter, note that the i-th edge of v_{i−j+1} is assigned value x_{i,j} = x_{i,i−(i−j+1)+1}, and so the i-th edge of v_j is assigned value x_{i,i−j+1}.)</p><p>∑_{i≥j} x_{i,j} ≤ 1 for all j ∈ [n], (u_j matching constraint) (2)</p><p>∑_{i≥j} x_{i,i−j+1} ≤ 1 for all j ∈ [n]. (v_j matching constraint) (3)</p><p>On the other hand, as A is α-competitive, we have that after the k-th round (when the maximum matching has cardinality k) algorithm A's fractional matching must have value at least α · k. (Else an adversary can stop the input after this round, leaving A with a worse than α-competitive matching.) Consequently, we have the following competitiveness constraints:</p><p>∑_{i≤k} ∑_{j≤i} x_{i,j} ≥ α · k for all k ∈ [n]. (4)</p><p>Combining constraints (2), (<ref type="formula">3</ref>) and (<ref type="formula">4</ref>) together with the non-negativity of the x_{i,j} yields the following linear program, LP(n), whose optimal value upper bounds any fractional online matching algorithm's competitiveness on G_n, by the above: maximize α subject to constraints (2), (3) and (4), and x_{i,j} ≥ 0 for all j ≤ i ∈ [n].</p><p>To bound the optimal value of LP(n), we provide a feasible solution to its LP dual, which we denote by Dual(n). By weak duality, any dual feasible solution's value upper bounds the optimal value of LP(n), which in turn upper bounds the optimal competitive ratio.
Using the dual variables ℓ_j, r_j for the degree constraints of the j-th left and right vertices (u_j and v_j, respectively) and the dual variable c_k for the competitiveness constraint of the k-th round, we get the following dual linear program. (Recall here again that x_{i,i−j+1} appears in the matching constraint of v_j, with dual variable r_j, and so x_{i,j} = x_{i,i−(i−j+1)+1} appears in the same constraint for v_{i−j+1}.)</p><p>minimize ∑_{j=1}^n (ℓ_j + r_j) subject to: ∑_{k=1}^n k · c_k ≥ 1; ℓ_j + r_{i−j+1} ≥ ∑_{k=i}^n c_k for all i ∈ [n], j ∈ [i]; and ℓ_j, r_j, c_k ≥ 0 for all j, k ∈ [n].</p><p>We provide the following dual solution: ℓ_j = r_j = max{0, (n − 2(j−1))/(n(n+1))} and c_k = 2/(n(n+1)).</p><p>We start by proving feasibility of this solution. The first constraint is satisfied with equality, as ∑_{k=1}^n k · c_k = (2/(n(n+1))) · n(n+1)/2 = 1. For the second constraint, as ∑_{k=i}^n c_k = 2(n−i+1)/(n(n+1)), it suffices to show that ℓ_j + r_{i−j+1} ≥ 2(n−i+1)/(n(n+1)) for all i ∈ [n], j ∈ [i]. Note that if j > n/2 + 1, then ℓ_j = r_j = 0 ≥ (n − 2(j−1))/(n(n+1)). So, for all j we have ℓ_j = r_j ≥ (n − 2(j−1))/(n(n+1)). Consequently, ℓ_j + r_{i−j+1} ≥ (n − 2(j−1))/(n(n+1)) + (n − 2(i−j))/(n(n+1)) = 2(n−i+1)/(n(n+1)), as required. Non-negativity of the ℓ_j, r_j, c_k variables is trivial, and so we conclude that the above is a feasible dual solution.</p><p>It remains to calculate this dual feasible solution's value. We do so for n even, for which ∑_{j=1}^n (ℓ_j + r_j) = 2 · ∑_{j=1}^{n/2} (n − 2(j−1))/(n(n+1)) = 2 · (n/2)(n/2 + 1)/(n(n+1)) = (n+2)/(2(n+1)) = 1/2 + 1/(2n+2), completing the proof.</p><p>Remark 1. Recall that Buchbinder et al. <ref type="bibr">[3]</ref> and Lee and Singla <ref type="bibr">[20]</ref> presented better-than-1/2-competitive algorithms for bounded-degree graphs and a bounded number of arrival batches. Our upper bound above shows that a deterioration of the competitive guarantees as the maximum degree and number of arrival batches increase (as in the algorithms of <ref type="bibr">[3,</ref><ref type="bibr">20]</ref>) is inevitable.</p><p>Remark 2. Recall that the asymptotic competitive ratio of an algorithm is the maximum c such that the algorithm always guarantees value at least ALG ≥ c · OPT − b for some fixed b > 0.
Our proof extends to this weaker notion of competitiveness easily, by revealing multiple copies of the hard family of Theorem 2.1 and letting x_{i,j} denote the average of its counterparts over all copies.</p></div>
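The dual certificate in the proof of Theorem 2.1 is easy to verify numerically. The sketch below (our own verification code, not from the paper) checks, in exact rational arithmetic, that ℓ_j = r_j = max{0, (n − 2(j−1))/(n(n+1))} and c_k = 2/(n(n+1)) satisfy the constraints of Dual(n) and have objective value 1/2 + 1/(2n+2).

```python
from fractions import Fraction

def dual_solution(n):
    """Dual solution from the proof of Theorem 2.1 (n even):
    l_j = r_j = max(0, (n - 2(j-1)) / (n(n+1))),  c_k = 2 / (n(n+1))."""
    denom = n * (n + 1)
    l = [max(Fraction(0), Fraction(n - 2 * (j - 1), denom)) for j in range(1, n + 1)]
    c = [Fraction(2, denom)] * n
    return l, c

def check_feasible_and_value(n):
    """Check Dual(n) feasibility and return the objective sum_j (l_j + r_j)."""
    l, c = dual_solution(n)
    r = l  # the solution is symmetric in left and right vertices
    # Constraint coming from the LP variable alpha: sum_k k * c_k >= 1.
    assert sum((k + 1) * c[k] for k in range(n)) == 1
    # Constraint for each x_{i,j}, j <= i: l_j + r_{i-j+1} >= sum_{k>=i} c_k.
    for i in range(1, n + 1):
        tail = sum(c[i - 1:])
        for j in range(1, i + 1):
            assert l[j - 1] + r[i - j] >= tail
    return sum(l) + sum(r)
```

For every even n this returns (n+2)/(2n+2) = 1/2 + 1/(2n+2), matching the theorem's bound.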
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">General Vertex Arrivals</head><p>In this section we present a (1/2 + Ω(1))-competitive randomized algorithm for online matching under general vertex arrivals. As discussed in the introduction, our approach will be to round (online) a fractional online matching algorithm's output. Specifically, this will be an algorithm from the family of fractional algorithms introduced in <ref type="bibr">[25]</ref>. In Section 3.1 we describe this family of algorithms. To motivate our rounding approach, in Section 3.2 we first present a simple lossless rounding method for a 1/2-competitive algorithm in this family. In Section 3.3 we then describe our rounding algorithm for a better-than-1/2-competitive algorithm in this family. Finally, in Section 3.4 we analyze this rounding scheme, and show that it yields a (1/2 + Ω(1))-competitive algorithm.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Finding a fractional solution</head><p>In this section we revisit the algorithm of Wang and Wong <ref type="bibr">[25]</ref>, which beats the 1/2 competitiveness barrier for online fractional matching under general vertex arrivals. Their algorithm (technically, family of algorithms) applies the primal-dual method to compute both a fractional matching and a fractional vertex cover, the dual of the fractional matching relaxation. The LPs defining these dual problems are as follows.</p><p>Primal (fractional matching): maximize ∑_{e∈E} x_e subject to ∑_{e∋u} x_e ≤ 1 for all u ∈ V and x_e ≥ 0 for all e ∈ E. Dual (fractional vertex cover): minimize ∑_{u∈V} y_u subject to y_u + y_v ≥ 1 for all {u, v} ∈ E and y_u ≥ 0 for all u ∈ V.</p><p>Before introducing the algorithm of <ref type="bibr">[25]</ref>, we begin by defining the fractional online vertex cover problem for vertex arrivals. When a vertex v arrives, if N_v(v) denotes the previously-arrived neighbors of v, then for each u ∈ N_v(v), a new constraint y_u + y_v ≥ 1 is revealed, which an online algorithm should satisfy by possibly increasing y_u or y_v. Suppose v has its dual value set to y_v = 1 − θ. Then all of its neighbors should have their dual increased to at least θ. Indeed, an algorithm may as well increase y_u to max{y_u, θ}. The choice of θ therefore determines an online fractional vertex cover algorithm. The increase of the potential ∑_u y_u due to the newly-arrived vertex v is thus 1 − θ + ∑_{u∈N_v(v)} (θ − y_u)^+. <ref type="foot">2</ref> In <ref type="bibr">[25]</ref>, θ is chosen so as to upper bound this term by 1 − θ + f(θ) for some function f(·). The primal solution (fractional matching) assigns values x_uv so as to guarantee feasibility of x and a ratio of β between the primal and dual values of x and y, implying 1/β-competitiveness of this online fractional matching algorithm, by feasibility of y and weak duality. The algorithm, parameterized by a function f(·) and a parameter β to be discussed below, is given formally in Algorithm 1.
In the subsequent discussion, N_v(u) denotes the set of neighbors of u that arrive before v.</p><p>Algorithm 1 is parameterized by a function f, taken from a family W of admissible functions defined in <ref type="bibr">[25]</ref>, and a constant β. As we will see, the choices of β guaranteeing feasibility of x are related to a quantity β*(f) associated with f.</p><p>Output: A fractional vertex cover solution y and a fractional matching x. 1: Let y_u ← 0 for all u, and x_uv ← 0 for all u, v. 2: foreach v in the stream do . . . Algorithm 1: Online general vertex arrival fractional matching and vertex cover</p><p>For functions f ∈ W, this definition of β*(f) can be simplified to β*(f) = 1 + f(0), due to the observation (see <ref type="bibr">[25,</ref><ref type="bibr">Lemmas 4,</ref><ref type="bibr">5]</ref>) that all functions f ∈ W satisfy identity (5).</p><p>As mentioned above, the competitiveness of Algorithm 1 for appropriate choices of f and β is obtained by relating the overall primal and dual values, ∑_e x_e and ∑_v y_v. As we show (and rely on later), one can even bound individual vertices' contributions to these sums. In particular, for any vertex v's arrival time, each vertex u's contribution to ∑_e x_e, which we refer to as its fractional degree, x_u := ∑_{w∈N_v(u)} x_uw, can be bounded in terms of its dual value by this point, y_u, as follows.</p><p>Lemma 3.3. For any vertices u, v ∈ V, let y_u be the potential of u prior to the arrival of v. Then the fractional degree of u just before v arrives, x_u := ∑_{w∈N_v(u)} x_uw, satisfies y_u/β ≤ x_u, along with a corresponding upper bound in terms of y_u.</p><p>Broadly, the lower bound on x_u is obtained by lower bounding the increase of x_u by the increase of y_u/β after each vertex arrival, while the upper bound follows from a simplification of a bound given in [25, Invariant 1] (implying feasibility of the primal solution), which we simplify using <ref type="bibr">(5)</ref>.
See Appendix B for a full proof.</p><p>Another observation we will need regarding the functions f &#8712; W is that they are decreasing.</p><p>Proof. As observed in <ref type="bibr">[25]</ref>, differentiating <ref type="bibr">(5)</ref> with respect to z yields</p><p>The next lemma of <ref type="bibr">[25]</ref> characterizes the achievable competitiveness of Algorithm 1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Lemma 3.5 ([25]</head><p>). Algorithm 1 with function f ∈ W and β ≥ β*(f) outputs a feasible fractional matching x and fractional vertex cover y, and is 1/β-competitive. Wang and Wong <ref type="bibr">[25]</ref> showed that taking κ ≈ 1.1997 and β = β*(f_κ), Algorithm 1 is ≈ 0.526-competitive. In later sections we show how to round the output of Algorithm 1 with f_κ for κ = 1 + 2ε, for some small constant ε, and β = 2 − ε, to obtain a (1/2 + Ω(1))-competitive algorithm. But first, as a warm-up, we show how to round this algorithm with κ = 1 and β = β*(f_1) = 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Warmup: a 1/2-competitive randomized algorithm</head><p>In this section we will round the 1/2-competitive fractional algorithm obtained by running Algorithm 1 with function f(θ) = f_1(θ) = 1 − θ and β = β*(f) = 2. We will devise a lossless rounding of this fractional matching algorithm, by including each edge e in the final matching with probability equal to the fractional value x_e assigned to it by Algorithm 1. Note that if v arrives after u, and F_u denotes the event that u is free when v arrives, then edge {u, v} is matched by an online algorithm with probability Pr[{u, v} ∈ M] = Pr[F_u] · Pr[{u, v} ∈ M | F_u]. That is, we must match {u, v} with probability z_u = x_uv / Pr[F_u] conditioned on u being free. The simplest way of doing so (if possible) is to pick an edge {u, v} with the above probability z_u always, and to match it only if u is free. Algorithm 2 below does just this, achieving a lossless rounding of this fractional algorithm. As before, N_v(u) denotes the set of neighbors of u that arrive before v.</p><p>Update the y_u's and x_uv's using Algorithm 1 with β = 2 and f = f_1. Algorithm 2: Online vertex arrival warmup randomized fractional matching</p><p>Algorithm 2 is well defined if, for each vertex v's arrival, z is a probability distribution; i.e., ∑_{u∈N_v(v)} z_u ≤ 1. The following lemma asserts precisely that. Moreover, it asserts that Algorithm 2 matches each edge with the desired probability. Lemma 3.6. Algorithm 2 is well defined, since for every vertex v on arrival, z is a valid probability distribution. Moreover, for each v and u ∈ N_v(v), it matches edge {u, v} with probability x_uv.</p><p>Proof. We prove both claims in tandem for each v, by induction on the number of arrivals. For the base case (v is the first arrival), the set N_v(v) is empty and thus both claims are trivial. Consider the arrival of a later vertex v.
By the inductive hypothesis, we have that each vertex u ∈ N_v(v) was previously matched with probability ∑_{w∈N_v(u)} x_wu. But by our choice of f(θ) = f_1(θ) = 1 − θ and β = 2, if w arrives after u, then y_u and θ at the arrival of w satisfy x_uw = (θ − y_u)^+.</p><p>That is, x_uw is precisely the increase in y_u following the arrival of w. On the other hand, when u arrived, its dual value y_u increased by 1 − θ = ∑_{w∈N_u(u)} (θ − y_w)^+ = ∑_{w∈N_u(u)} x_wu.</p><p>To see this last step, we recall first that, by definition of Algorithm 1 and our choice of f(θ) = 1 − θ, the value θ on arrival of v is chosen to be the largest θ ≤ 1 satisfying</p><p>∑_{u∈N_v(v)} (θ − y_u)^+ ≤ 1 − θ. (6)</p><p>But the inequality (<ref type="formula">6</ref>) is an equality whether or not θ = 1 (if θ = 1, both sides are zero). We conclude that y_u = ∑_{v′∈N_v(u)} x_uv′ just prior to the arrival of v. But then, by the inductive hypothesis, this implies that Pr[u free when v arrives] = 1 − y_u (yielding an easily-computable formula for z_u). Consequently, by (<ref type="formula">6</ref>) we have that when v arrives, z is a probability distribution, as ∑_{u∈N_v(v)} z_u = ∑_{u∈N_v(v)} (θ − y_u)^+ / (1 − y_u) ≤ ∑_{u∈N_v(v)} (θ − y_u)^+ / (1 − θ) ≤ 1.</p><p>Finally, for u to be matched to a later-arriving neighbor v, it must be picked and free when v arrives, and so {u, v} is indeed matched with probability Pr[{u, v} ∈ M] = (x_uv / Pr[u is free when v arrives]) · Pr[u is free when v arrives] = x_uv.</p><p>In the next section we present an algorithm which allows us to round better-than-1/2-competitive algorithms derived from Algorithm 1.</p></div>
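The rounding idea of Algorithm 2 and Lemma 3.6 can be sanity-checked by simulation. The sketch below is our own simplification to one-sided arrivals, where the probability that an offline vertex u is free has the closed form 1 − (fractional degree of u so far), playing the role of 1 − y_u; the instance and interface are assumptions, not the paper's.

```python
import random

def round_online(offline, arrivals, trials=40_000, seed=0):
    """Round a fractional matching online (one-sided arrivals sketch).

    When online vertex v arrives with fractional values x[u] on edges to
    offline vertices u, sample at most one u with probability
    z_u = x[u] / Pr[u free], and match it if u is indeed free.  Here
    Pr[u free] = 1 - (u's fractional degree so far), mirroring
    Pr[F_u] = 1 - y_u in Lemma 3.6.  Returns empirical match frequencies.
    """
    rng = random.Random(seed)
    counts = {(v, u): 0 for v, x in arrivals for u in x}
    for _ in range(trials):
        free = set(offline)
        deg = dict.fromkeys(offline, 0.0)
        for v, x in arrivals:
            r, acc, pick = rng.random(), 0.0, None
            for u, xu in x.items():
                acc += xu / (1.0 - deg[u])   # z_u = x_uv / Pr[u free]
                if r < acc and pick is None:
                    pick = u
            if pick is not None and pick in free:
                free.remove(pick)
                counts[(v, pick)] += 1
            for u, xu in x.items():
                deg[u] += xu
    return {e: c / trials for e, c in counts.items()}
```

On a toy instance the empirical match frequencies agree with the fractional values x, i.e., the rounding is lossless, exactly as Lemma 3.6 predicts.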
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">An improved algorithm</head><p>In this section, we build on Algorithm 2 and show how to improve it to get a (1/2+&#8486;(1)) competitive ratio.</p><p>There are two concerns when modifying Algorithm 2 to work for a general function from the family W . The first is how to compute the probability that a vertex u is free when vertex v arrives, in Line 6. In the simpler version, we inductively showed that this probability is simply 1-y u , where y u is the dual value of u as of v's arrival (see the proof of Lemma 3.6). With a general function f , this probability is no longer given by a simple formula. Nevertheless, it is easily fixable: We can either use Monte Carlo sampling to estimate the probability of u being free at v's arrival to a given inverse polynomial accuracy, or we can in fact exactly compute these probabilities by maintaining their marginal values as the algorithm progresses. In what follows, we therefore assume that our algorithm can compute these probabilities exactly.</p><p>The second and more important issue is with the sampling step in Line 7. In the simpler algorithm, this step is well-defined as the sampling probabilities indeed form a valid distribution: I.e., u&#8712;Nv(v) z u 1 for all vertices v. However, with a general function f , this sum can exceed one, rendering the sampling step in Line 7 impossible. Intuitively, we can normalize the probabilities to make it a proper distribution, but by doing so, we end up losing some amount from the approximation guarantee. We hope to recover this loss using a second sampling step, as we mentioned in Section 1.2 and elaborate below.</p><p>Suppose that, instead of &#946; = 2 and f = f 1 (i.e., the function f (&#952;) = 1&#952;), we use f = f 1+2&#949; and &#946; = 2&#949; to define x uv and y u values. 
As we show later in this section, for a sufficiently small ε, we then have Σ_{u∈N_v(v)} z_u ≤ 1 + O(ε), implying that the normalization factor is at most 1 + O(ε). However, since the approximation factor of the fractional solution is only 1/2 + O(ε) for such a solution (i.e., Σ_{{u,v}∈E} x_uv ≥ (1/β) · Σ_{u∈V} y_u), the loss due to normalization is too significant to ignore. Now suppose that we allow arriving vertices to sample a second edge with a small (namely √ε) probability and match that second edge if the endpoint of the first sampled edge is already matched. Consider the arrival of a fixed vertex v such that Σ_{u∈N_v(v)} z_u > 1, and let z′_u denote the normalized z_u values. Further, let F_w denote the event that vertex w is free (i.e., unmatched) at the arrival of v. Then the probability that v matches u for some u ∈ N_v(v) using either of the two sampled edges is Pr[F_u] · z′_u · (1 + √ε · Σ_{w∈N_v(v)} z′_w · Pr[¬F_w | F_u]), (7) which is the same expression as (1) from Section 1.2, restated here for quick reference. Recall that the first term inside the parentheses accounts for the probability that v matches u via the first sampled edge, and the second term accounts for the probability that the same happens via the second sampled edge. Note that the second sampled edge is used only when the first one is incident to an already-matched vertex and the other endpoint of the second edge is free. Hence we have the summation of conditional probabilities in the second term, where the events are conditioned on the other endpoint, u, being free. If the probability given in (<ref type="formula">7</ref>) were x_uv for all {u, v} ∈ E, we would have the same guarantee as the fractional solution x_uv, and the rounding would be lossless.
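The two-step sampling just described can be sketched as follows; this is a minimal illustration with an interface of our own devising, not the paper's pseudocode (in particular, it omits the later pruning step of Line 12).

```python
import math
import random

def two_choice_step(z, free, eps, rng):
    """Sketch of the two-edge sampling for an arriving vertex v.

    z: maps each earlier neighbor u to z_u.  If the z_u sum to more
    than one, they are normalized to z'_u, and a second edge is
    sampled with probability sqrt(eps); the second edge is used only
    if the first sampled endpoint is already matched.
    Returns the neighbor v gets matched to, or None.
    """
    total = sum(z.values())
    scaled = total > 1.0
    zp = {u: zu / total for u, zu in z.items()} if scaled else z

    def sample():
        r, acc = rng.random(), 0.0
        for u, zu in zp.items():
            acc += zu
            if acc > r:
                return u
        return None

    first = sample()
    second = None
    if scaled and math.sqrt(eps) > rng.random():
        second = sample()
    # Match the first free endpoint, prioritizing the primary sample.
    for u in (first, second):
        if u is not None and free[u]:
            return u
    return None
```

The second sample is what recovers (most of) the 1/(1 + O(ε)) normalization loss: it only fires in the regime where normalization actually happened.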
This seems unlikely, yet we can show that the quantity in (<ref type="formula">7</ref>) is at least (1 - ε²) · x_uv for most of the edges in the graph (most not by number, but by the total fractional value of the x_uv's), showing that our rounding is almost lossless. We postpone further discussion of the analysis to Section 3.4, where we highlight the main ideas before proceeding with the formal proof.</p><p>[Algorithm 3 (pseudocode): on the arrival of each vertex v, update the y_u's and x_uv's using Algorithm 1 with β = 2 - ε and f = f_{1+2ε}; then perform the sampling steps described below.]</p><p>Our improved algorithm is outlined in Algorithm 3. Up until Line 6, it is similar to Algorithm 2, except that it uses β = 2 - ε and f = f_{1+2ε}, where we choose ε > 0 to be any constant small enough that the results in the analysis hold. In Line 8, if the sum of the z_u's exceeds one, we normalize the z_u to obtain a valid probability distribution z′_u. In Line 9, we sample the first edge incident to an arriving vertex v. In Line 11, we sample a second edge incident to the same vertex with probability √ε if we had to scale down the z_u's in Line 8. Then, in Line 12, we drop the sampled second edge with the minimal probability ensuring that no edge {u, v} is matched with probability more than x_uv. Since <ref type="bibr">(7)</ref> gives the exact probability of {u, v} being matched, this probability of dropping an edge {u, v} can be computed by the algorithm. However, to compute it, we need the conditional probabilities Pr[F_w | F_u], which again can be estimated using Monte Carlo sampling<ref type="foot">3</ref>.</p><p>In the subsequent lines, we match v to a free vertex (if any) among its chosen neighbors, prioritizing its first choice.</p><p>For the purpose of analysis, we view Algorithm 3 as constructing a greedy matching on a directed acyclic graph (DAG) H_τ, defined in the following two definitions.</p><p>Definition 3.7 (Non-adaptive selection graph G_τ).
Let τ denote the random choices made by the vertices of G. Let G_τ be the DAG defined by the arcs (v, u_1) and (v, u_2) for all vertices v ∈ V. We call the arcs (v, u_1) primary arcs and the arcs (v, u_2) secondary arcs.</p><p>Definition 3.8 (Pruned selection graph H_τ). Construct H_τ from G_τ by removing all arcs (v, u) (primary or secondary) such that there exists a primary arc (v′, u) with v′ arriving before v. We further remove a secondary arc (v, u) if there is a primary arc (v, u); i.e., if a vertex u has at least one incoming primary arc, we remove all incoming primary arcs that came after the first primary arc, as well as all secondary arcs that came after, or from the same vertex as, the first primary arc.</p><p>It is easy to see that the matching constructed by Algorithm 3 is the greedy matching constructed on H_τ based on the order of arrival, prioritizing primary arcs. The following lemma shows that the set of matched vertices obtained by this greedy matching does not change much under any change in the random choices of a single vertex v, which will prove useful later on. It can be proven rather directly by an inductive argument showing that the size of the symmetric difference of the sets of matched vertices in G_τ and G_τ′ does not increase after each arrival besides that of v, whose arrival clearly increases this symmetric difference by at most two. See Appendix A for details.</p><p>Lemma 3.9. Let G_τ and G_τ′ be two realizations of the random digraph in which all vertices make the same choices except for one vertex v. Then the number of vertices that have a different matched status (free/matched) in the matchings computed on H_τ and H_τ′ at any point in time is at most two.</p></div>
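The pruning of Definitions 3.7 and 3.8 and the subsequent greedy matching can be sketched as follows; the list-of-triples encoding of τ is our own illustration, not the paper's notation.

```python
def greedy_match(choices):
    """Greedy matching on the pruned selection graph H_tau (sketch).

    choices: list of (v, u1, u2) in arrival order, where u1 and u2 are
    v's primary and secondary picks among earlier vertices (or None).
    Returns the matching as a list of (u, v) pairs.
    """
    has_primary = set()   # vertices that already received a primary arc
    matched = set()
    matching = []
    for v, u1, u2 in choices:
        # Pruning (Definition 3.8): an arc (v, u) survives only if no
        # earlier primary arc already points at u; a secondary arc
        # duplicating v's own primary pick is dropped as well.
        surviving = []
        if u1 is not None and u1 not in has_primary:
            surviving.append(u1)
            has_primary.add(u1)
        if u2 is not None and u2 not in has_primary and u2 != u1:
            surviving.append(u2)
        # Greedy step: match v to its first free surviving choice,
        # prioritizing the primary arc.
        for u in surviving:
            if v not in matched and u not in matched:
                matched.update((u, v))
                matching.append((u, v))
    return matching
```

Running this on two realizations of τ that differ in one vertex's choices illustrates Lemma 3.9: the sets of matched vertices differ in at most two elements at any point in time.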
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Analysis</head><p>In this section, we analyze the competitive ratio of Algorithm 3. We start with an outline of the analysis where we highlight the main ideas.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.1">High-Level Description of Analysis</head><p>As described in Section 3.3, the main difference compared to the simpler 1 /2-competitive algorithm is the change of the construction of the fractional solution, which in turn makes the rounding more complex. In particular, we may have at the arrival of a vertex v that u&#8712;Nv(v) z u &gt; 1. The majority of the analysis is therefore devoted to such "problematic" vertices since otherwise, if u&#8712;Nv(v) z u 1, the rounding is lossless due to the same reasons as described in the simpler setting of Section 3.2. We now outline the main ideas in analyzing a vertex v with u&#8712;Nv(v) z u &gt; 1. Let F w be the event that vertex w is free (i.e., unmatched) at the arrival of v. Then, as described in Section 3.3, the probability that we select edge {u, v} in our matching is the minimum of x uv (because of the pruning in Line 12), and</p><p>By definition, Pr[F u ] &#8226; z u = x uv , and the expression inside the parentheses is at least</p><p>To analyze this inequality, we first use the structure of the selected function f = f 1+2&#949; and the selection of &#946; = 2&#949; to show that if u&#8712;Nv(v) z u &gt; 1 then several structural properties hold (see Lemma 3.10 and Corollary 3.11 in Section 3.4.2). In particular, there are absolute constants 0 &lt; c &lt; 1 and C &gt; 1 (both independent of &#949;) such that 1.</p><p>u&#8712;Nv(v) z u 1 + C&#949;;</p><p>The first property implies that the right-hand-side of ( <ref type="formula">8</ref>) is at most 1+C&#949;; and the second property implies that v has at least &#8486;(1/ &#8730; &#949;) neighbors and that each neighbor</p><p>For simplicity of notation, we assume further in the high-level overview that v has exactly 1/ &#8730; &#949; neighbors and each u &#8712; N v (v) satisfies z &#8242; u = &#8730; &#949;. 
Inequality (8) would then be implied by Σ_{w∈N_v(v)} (1 - Pr[F_w | F_u]) ≥ C. (9) To get an intuition for why we would expect the above inequality to hold, it is instructive to consider the unconditional version: Σ_{w∈N_v(v)} (1 - Pr[F_w]) ≥ c · |N_v(v)| = c/√ε ≫ C, where the first inequality is from the fact that Pr[F_w] ≤ 1 - c for any neighbor w ∈ N_v(v). The large slack in the last inequality, obtained by selecting ε > 0 to be a sufficiently small constant, is used to bound the impact of conditioning on the event F_u. Indeed, due to the large slack, we have that (<ref type="formula">9</ref>) is satisfied if the quantity Σ_{w∈N_v(v)} Pr[F_w | F_u] is not too far from the same summation with unconditional probabilities, i.e., Σ_{w∈N_v(v)} Pr[F_w]. Specifically, it is sufficient to show that the two sums differ by at most a sufficiently small constant fraction of 1/√ε. (10)</p><p>We do so by bounding the correlation between the events F_u and F_w in a highly non-trivial manner, which constitutes the heart of our analysis. The main challenges are that the events F_u and F_w can be positively correlated and that, by conditioning on F_u, the primary and secondary choices of different vertices are no longer independent. We overcome the latter difficulty by replacing the conditioning on F_u by a conditioning on the component of H_τ (at the time of v's arrival) that includes u. As explained in Section 3.3, the matching output by our algorithm is equivalent to the greedy matching constructed on H_τ, and so the component containing u (at the time of v's arrival) determines F_u. But what can this component look like, assuming the event F_u? First, u cannot have any incoming primary arc, since then u would be matched (and so the event F_u would be false). However, u could have incoming secondary arcs, provided the tails of those arcs are matched through their primary arcs. Furthermore, u can have an outgoing primary and possibly a secondary arc if the selected neighbors are already matched.
These neighbors can in turn have incoming secondary arcs, at most one incoming primary arc (due to the pruning in the definition of H_τ), and outgoing primary and secondary arcs; and so on. In Figure <ref type="figure">2</ref>, we give two examples of the possible structure, when conditioning on F_u, of u's component in H_τ (at the time of v's arrival). The left example contains secondary arcs, whereas the component on the right is arguably simpler and contains only primary arcs.</p><p>An important step in our proof is to show that, for most vertices u, the component is of the simple form depicted on the right with probability almost one. That is, it is a path P consisting of primary arcs, referred to as a primary path (see Definition 3.13), that further satisfies: (i) it has length O(ln(1/ε)); and (ii) the total z-value of the arcs in the blocking set of P is O(ln(1/ε)). The blocking set is defined in Definition 3.14; informally, it contains those arcs whose appearance as primary arcs in G_τ would cause arcs of P to be pruned (or blocked) from H_τ.</p><p>Let P be the set of primary paths of the above type that appear with positive probability as u's component in H_τ. Further, let EQ_P be the event that u's component equals P. Then we show (for most vertices) that Σ_{P∈P} Pr[EQ_P | F_u] is almost one. For simplicity, let us assume here that the sum is equal to one. Then, by the law of total probability and since Σ_{P∈P} Pr[EQ_P | F_u] = 1, we have Σ_{w∈N_v(v)} z′_w · Pr[F_w | F_u] = Σ_{P∈P} Pr[EQ_P | F_u] · (Σ_{w∈N_v(v)} z′_w · Pr[F_w | EQ_P]), where the last equality is because the component P determines F_u. The proof is then completed by analyzing the term inside the parentheses for each primary path P ∈ P separately. As we prove in Lemma 3.15, the independence of the primary and secondary arc choices of the vertices is maintained after conditioning on EQ_P.
<ref type="foot">4</ref> Furthermore, we show that there is a bijection between the outcomes of the unconditional and the conditional distributions, so that the expected number of vertices that make different choices under this pairing can be upper bounded by roughly the length of the path plus the z-value of the edges in the blocking set. So, for a path P as above, we have that the expected number of vertices that make different choices in the paired outcomes is O(ln(1/&#949;)) which, by Lemma 3.9, implies that the expected number of vertices that change matched status is also upper bounded by O(ln(1/&#949;)). In other words, we have for every P &#8712; P that</p><p>which implies <ref type="bibr">(10)</ref> for a small enough choice of &#949;. This completes the overview of the main steps in the analysis. The main difference in the formal proof is that not all vertices satisfy that their component is a short primary path with probability close to 1. To that end, we define the notion of good vertices in Section 3.4.4, which are the vertices that are very unlikely to have long directed paths of primary arcs rooted at them. These are exactly the vertices v for which we can perform the above analysis for most neighbors u (in the proof of the "key lemma") implying that the rounding is almost lossless for v. Then, in Section 3.4.6, we show using a rather simple charging scheme that most of the vertices in the graph are good. Finally, in Section 3.4.7, we put everything together and prove Theorem 1.2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.2">Useful Properties of W Functions and Algorithm 3</head><p>For the choice of f = f 1+2&#949; as we choose, we have</p><p>we give a more manageable upper bound for f (&#952;) which holds for sufficiently small &#949;. Based on this simple upper bound on f and some basic calculus, we obtain the following useful structural properties for the conditional probabilities, z u , of Algorithm 3. See Appendix C.</p><p>Lemma 3.10. (Basic bounds on conditional probabilities z u ) There exist absolute constants c &#8712; (0, 1) and C &gt; 1/c &gt; 1 and &#949; 0 &#8712; (0, 1) such that for every &#949; &#8712; (0, &#949; 0 ) the following holds: for every vertex v &#8712; V , if y u is the dual variable of a neighbor u &#8712; N v (v) before v's arrival and &#952; is the value chosen by Algorithm 1 on v's arrival, then for z u as defined in Algorithm 3, we have:</p><p>The following corollary will be critical to our analysis:</p><p>Corollary 3.11. There exist absolute constants c &gt; 0 and &#949; 0 &gt; 0 such that for all &#949; &#8712; (0, &#949;), on arrival of any vertex v &#8712; V , if z as defined in Algorithm 3 satisfies u&#8712;Nv(v) z u &gt; 1, then for every u &#8712; N v (v) we have c Pr[u is free when v arrives] 1c. Definition 3.13 (Certified Primary Path). 
A tuple (P, T) is a certified primary path in H_τ if P is a directed path of primary arcs in H_τ and either (a) the last vertex of P does not have an outgoing primary arc in G_τ and T = ∅, or (b) the last vertex u of P has an outgoing primary arc (u, w) in G_τ and T = (u′, w) is a primary arc in H_τ such that u′ precedes u in the arrival order.</p><p>To elaborate, a certified primary path (P, T) is made of a (directed) path P of primary arcs in H_τ, and T is a certificate of P's termination in H_τ: it ensures that the last vertex u of P has no outgoing primary arc in H_τ, either because u picked no primary arc (T = ∅) or because the picked primary arc (u, w) is blocked by another primary arc T = (u′, w) which appears in H_τ.</p><p>As described, G_τ and H_τ differ in arcs (u, w) that are blocked by earlier primary arcs into their target vertex w. We define generally the sets of arcs which can block an edge, a path, or a certified path from appearing in H_τ, as in the following definition.</p><p>Definition 3.14 (Blocking sets). For an arc (u, w), define its blocking set B(u, w) := {(u′, w) | {u′, w} is an edge and u′ arrived before u} to be the set of those arcs whose appearance as a primary arc in G_τ blocks (u, w) from being in H_τ. In other words, an arc (u, w) is in H_τ as a primary or secondary arc if and only if (u, w) is in G_τ and none of the arcs in its blocking set B(u, w) is in G_τ as a primary arc.</p><p>The blocking set of a path P is simply the union of its arcs' blocking sets, B(P) := ∪_{(u,w)∈P} B(u, w). The blocking set of a certified primary path (P, T) is the union of the blocking sets of P and T, B(P, T) := B(P ∪ T).</p><p>The probability of an edge, path, or certified primary path appearing in H_τ is governed in part by the probability of the arcs in their blocking sets appearing as primary arcs in G_τ.
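Definition 3.14 is easy to state in code; the following is a small sketch with an edge-set encoding of our own choosing.

```python
def blocking_set(arc, edges, arrival_order):
    """B(u, w) from Definition 3.14 (sketch).

    arc: the pair (u, w).
    edges: set of frozenset pairs, the edges of the underlying graph.
    arrival_order: list of vertices in order of arrival.
    Returns the arcs (u', w) with {u', w} an edge and u' arriving
    before u; if any of these is picked as a primary arc in G_tau,
    the arc (u, w) is pruned from H_tau.
    """
    u, w = arc
    pos = {vertex: i for i, vertex in enumerate(arrival_order)}
    return {(up, w) for up in arrival_order
            if pos[u] > pos[up] and frozenset((up, w)) in edges}
```

The blocking set of a path is then just the union of the blocking sets of its arcs, matching B(P) in the definition above.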
Since an arc (v, u) is picked as a primary arc when v arrives with probability roughly z_u (more precisely, with probability z′_u ∈ [z(v, u)/(1 + Cε), z(v, u)], by Lemma 3.10), it will be convenient to denote by z(v, u) and z′(v, u) the values z_u and z′_u at v's arrival, and by z(S) = Σ_{s∈S} z(s) and z′(S) = Σ_{s∈S} z′(s) the sums of the z- and z′-values of the arcs in a set of arcs S.</p><p>Product distributions. Note that, by definition, the distributions over primary and secondary arc choices of the vertices are product distributions (the choices are independent). As such, their joint distribution is defined by its marginals. Let p_w and s_w denote the distributions of the primary and secondary arc choices of w, respectively. That is, for every u ∈ N_w(w), p_w(u) is the marginal probability that w selects (w, u) as its primary arc, and s_w(u) is the marginal probability that w selects (w, u) as its secondary arc. Given our target bound <ref type="bibr">(8)</ref>, it would be useful to show that conditioning on F_u preserves the independence of these arc choices. Unfortunately, conditioning on F_u does not preserve this independence. We will therefore later refine our conditioning to the existence of primary paths in H_τ, which, as we show below, does maintain independence of the arc choices.</p><p>Lemma 3.15. For a certified primary path (P, T), let EQ_(P,T) be the event that the path P equals a maximal connected component of H_τ and the termination of P is certified by T. Then the conditional distributions of primary and secondary choices conditioned on EQ_(P,T) are product distributions; i.e., these conditional choices are independent.
Moreover, if we let p̃_w and s̃_w denote the conditional distributions of the primary and secondary choices of w, respectively, then TV(p_w, p̃_w) ≤ z(R(w)) and TV(s_w, s̃_w) ≤ z(R(w)), where R(w) ⊆ {w} × N_w(w) is the set of arcs leaving w whose existence as primary arcs in G_τ is ruled out by conditioning on EQ_(P,T), and the union of these R(w), denoted by R(P, T), satisfies R(P, T) ⊆ B(P, T) ∪ {(w, r) : w ∈ V, r the root of P} ∪ ∪_{w∈P∪T} {w} × N_w(w). (<ref type="formula">14</ref>)</p><p>Proof. We first bound the total variation distance between the conditional and unconditional distributions. For primary choices, conditioning on EQ_(P,T) rules out the following sets of primary arc choices. For a vertex w ∉ P arriving before the root r of P, this conditioning rules out w picking any edge of B(P, T) as its primary arc. For a vertex w ∉ P arriving after the root r of P, this conditioning rules out w picking the arc (w, r). Finally, this conditioning rules out some subset of the arcs leaving vertices of P ∪ {w : T = (w, w′)}. Taking the union over these supersets of R(w), we obtain <ref type="bibr">(14)</ref>. Now, the probability of each ruled-out primary choice (w, u) ∈ R(w) is zero under p̃_w and z′(w, u) under p_w, and all other primary choices have their probabilities increased, with a total increase of Σ_{(w,u)∈R(w)} z′(w, u), from which we conclude that TV(p_w, p̃_w) ≤ Σ_{(w,u)∈R(w)} z′(w, u) ≤ z(R(w)).</p><p>The proof for secondary arcs is nearly identical, the only differences being that the sets of ruled-out secondary arcs can be smaller (specifically, secondary arcs to w′ such that T = (u, w′) are not ruled out by this conditioning), and that the probability of any arc (w, u) being picked as the secondary arc of w is at most √ε · z′(w, u) ≤ z(w, u).</p><p>Finally, we note that the primary and secondary choices of different vertices are independent.
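The total variation distances appearing here are between distributions over finite choice sets, so they can be computed directly; this is a standard helper, not part of the paper.

```python
def tv_distance(p, q):
    """Total variation distance between two finite distributions,
    given as dicts mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in support)
```

For instance, ruling out a single primary choice of mass 0.25 and renormalizing yields a TV distance of exactly 0.25, matching the lemma's bound TV(p_w, p̃_w) ≤ Σ_{(w,u)∈R(w)} z′(w, u) with equality in this simple case.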
Therefore, conditioning on each vertex w not picking a primary arc in its ruled-out set R(w) still yields a product distribution, and similarly for the distributions over secondary choices.</p><p>It is easy to show that a particular certified primary path (P, T) with a high value of z(B(P, T)) is unlikely to appear in H_τ, due to the high likelihood of arcs in its blocking set being picked as primary arcs. The following lemma asserts that the probability of a vertex u being the root of any certified primary path (P, T) with a high z(B(P, T)) value is low.</p><p>Lemma 3.16. For any k ≥ 0 and any vertex u, we have Pr[H_τ contains a certified primary path (P, T) with P rooted at u and z(B(P, T)) ≥ k] ≤ e^{-k/2}, and Pr[H_τ contains a primary path P rooted at u with z(B(P)) ≥ k] ≤ e^{-k/2}.</p><p>Proof. We first prove the bound for certified primary paths. For a certified primary path (P, T) whose last vertex is w, define P* := P if T = ∅, and P* := P ∪ {(w, w″)} if T = (w′, w″). Observe that z(B(P*)) ≥ k whenever z(B(P, T)) ≥ k. This is trivial when T = ∅. To see it for the case T = (w′, w″), note that B(w′, w″) ⊆ B(w, w″), as w arrives after w′. Also note that for (P, T) to be in H_τ, the path P* must be in G_τ.</p><p>We say a directed primary path P′ = (u_1, …, u_ℓ) rooted at u is k-minimal if z(B(P′)) ≥ k but z(B(P′ \ {(u_{ℓ-1}, u_ℓ)})) &lt; k. For such a path P′, define B*(P′) as follows: initially set B*(P′) = B(P′ \ {(u_{ℓ-1}, u_ℓ)}); then, from B(u_{ℓ-1}, u_ℓ), the blocking set of the last arc of P′, add arcs to B*(P′) in reverse order of their sources' arrival until z(B*(P′)) ≥ k.</p><p>Consider a certified primary path (P, T) with P rooted at u. If a k-minimal path rooted at u which is not a prefix of P* is contained in G_τ, then (P, T) does not appear in G_τ, and therefore it does not appear in H_τ.
On the other hand, if z(B(P, T ))</p><p>k then for (P, T ) to appear in H &#964; , we must have that the (unique) k-minimal prefix P &#8242; of P * must appear in G &#964; , and that none of the edges of B * (P &#8242; ) appear in G &#964; . Moreover, for any certified primary path with z(B(P, T )), conditioning on the existence of P &#8242; in G &#964; does not affect random choices of vertices with outgoing arcs in B * (P &#8242; ), as these vertices are not in P &#8242; . Since by Lemma 3.10 each arc (w, w &#8242; ) appears in G &#964; with probability z &#8242; (v, u) z(v, u)/(1 + C&#949;) z(v, u)/2, we conclude that for any k-minimal primary path P &#8242; rooted at u, we have Pr[H &#964; contains any certified primary path (P, T ) with z(B(P, T ))</p><p>Taking total probability P u , the set of all k-minimal primary paths P &#8242; rooted at u, we get that indeed, since u is the root of at most one k-minimal primary path in any realization of G &#964; , Pr[H &#964; contains a certified primary path (P, T ) rooted at u with z(B(P, T )) k]</p><p>Pr[H &#964; contains a (P, T ) with z(B(P, T ))</p><p>The proof for primary path is essentially the same as the above, taking P * = P .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.4">Analyzing Good Vertices</head><p>Consider the set of vertices that are unlikely to be roots of long directed paths of primary arcs in H &#964; . In this section, we show that Algorithm 3 achieves almost lossless rounding for such vertices, and hence we call them good vertices. We start with a formal definition: Definition 3.17 (Good vertices). We say that a vertex v is good if Pr &#964; [H &#964; has a primary path rooted at v of length at least 2000 &#8226; ln(1/&#949;)] &#949; 6 .</p><p>Otherwise, we say v is bad.</p><p>As the main result of this section, for good vertices, we prove the following: Theorem 3.18. Let v be a good vertex. Then</p><p>x uv .</p><p>Notational conventions. Throughout this section, we fix v and let z, z &#8242; be as in Algorithm 3. Moreover, for simplicity of notation, we suppose that the stream of vertices ends just before v's arrival and so quantities, such as G &#964; and H &#964; , refer to their values when v arrives. For a vertex u, we let F u denote the event that u is free (i.e., unmatched) when v arrives. In other words, F u is the event that u is free in the stream that ends just before v's arrival.</p><p>To prove the theorem, first note that it is immediate if u&#8712;Nv(v) z u 1: in that case, we have z &#8242; = z and so the probability to match v by a primary edge, by definition of z u , is simply</p><p>x uv .</p><p>From now on we therefore assume u&#8712;Nv(v) z u &gt; 1, which implies (I) u&#8712;Nv(v) z &#8242; u = 1, and moreover, by Lemma 3.10 and Corollary 3.11, for every u &#8712; N v (v):</p><p>where c is the constant of Corollary 3.11 and C is the constant of Lemma 3.10. 
We now state the key technical lemma in the proof of Theorem 3.18.</p><p>Lemma 3.19. Let u ∈ N_v(v) satisfy Pr_τ[H_τ has a primary path rooted at u of length at least 2000 · ln(1/ε)] ≤ ε^5. (15) Then Σ_{w∈N_v(v)} z′_w · Pr[F_w | F_u] is small enough for (8) to hold.</p><p>Note that the above lemma bounds the quantity Σ_{w∈N_v(v)} z′_w · Pr[F_w | F_u], which allows us to show that (8) holds, and thus that the edge {u, v} is picked in the matching with probability very close to x_uv. Before giving the proof of the lemma, we give the formal argument for why the lemma implies the theorem.</p><p>Proof of Theorem 3.18. Define S to be the set of neighbors u ∈ N_v(v) satisfying Pr_τ[H_τ has a primary path rooted at u of length at least 2000 · ln(1/ε)] > ε^5; in other words, S is the set of neighbors of v that violate <ref type="bibr">(15)</ref>. As v is good, we have ε^6 ≥ Pr_τ[H_τ has a primary path rooted at v of length at least 2000 · ln(1/ε)] ≥ Σ_{u∈S} z′_u · Pr[F_u] · ε^5. The second inequality holds because v selects the primary arc (v, u) with probability z′_u and, conditioned on F_u, u cannot already have an incoming primary arc, which implies that (v, u) is present in H_τ and extends a long primary path rooted at u to one rooted at v. The last inequality follows from the choice of S.</p><p>By Property (III), z_u ≤ (1 + Cε) · z′_u, and so by rewriting we get that Σ_{u∈S} x_uv = Σ_{u∈S} z_u · Pr[F_u] = O(ε). (16) In other words, the contribution of the neighbors of v in S to Σ_{u∈N_v(v)} x_uv is insignificant compared to the contribution of all neighbors, since Σ_{u∈N_v(v)} x_uv = Σ_{u∈N_v(v)} z_u · Pr[F_u] ≥ c, (17) where the inequality follows from the assumption Σ_{u∈N_v(v)} z_u ≥ 1 and from Pr[F_u] ≥ c, by Property (IV).</p><p>We proceed to analyze a neighbor u ∈ N_v(v) \ S. Recall that it is enough to verify (8) to conclude that the edge {u, v} is picked in the matching with probability x_uv. By (III), the definition of S, and Lemma 3.19, we have that for every u ∈ N_v(v) \ S, the edge {u, v} is taken in the matching with probability x_uv. Thus, the probability that v is matched on arrival is at least Σ_{u∈N_v(v)\S} x_uv, which by (16) and (17) is at least the claimed (1 - O(ε)) · Σ_{u∈N_v(v)} x_uv, where the last step holds because Σ_{u∈N_v(v)} x_uv ≥ c, as calculated in (17).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.5">Proof of the Key Lemma</head><p>It remains to prove the key lemma, Lemma 3.19, which we do here.</p><p>Proof of Lemma 3.19. For a certified primary path (P, T ) let EQ (P,T ) be the event as defined in Lemma 3.15, and let IN (P,T ) be the event that P is a maximal primary path in H &#964; and the termination of P is certified by T . Further, let C = {(P, T ) : (P, T ) is a certified primary path rooted at u with Pr[IN (P,T ) ] &gt; 0} be the set of certified primary paths rooted at u that have a nonzero probability of being maximal in H &#964; . Then, by the law of total probability and since (P,T )&#8712;C Pr[IN (P,T ) | F u ] = 1 (since conditioning on F u implies in particular that u has no incoming primary arc), we can rewrite the expression to bound,</p><p>as</p><p>We analyze this expression in two steps. First, in the next claim, we show that we can focus on the case when the certified path (P, T ) is very structured and equals the component of u in H &#964; . We then analyze the sum in that structured case.</p><p>Claim 3.20. Let P &#8838; C contain those certified primary paths (P, T ) of C that satisfy: P has length less than 2000 &#8226; ln(1/&#949;) and z(B(P, T )) 2 ln(1/&#949;). Then, we have</p><p>(P,T )&#8712;P</p><p>Proof. Define the following subsets of certified primary paths rooted at u:</p><p>. Since u satisfies (15), we have that</p><p>On the other hand, by Lemma 3.16 and Pr[F u ] c (by Property (IV)), we have that</p><p>In other words, almost all probability mass lies in those outcomes where one of the certified paths (P, T ) &#8712; P is in H &#964; . It remains to prove that, in those cases, we almost always have that the component of u in H &#964; equals the path P (whose termination is certified by T ). Specifically, let EQ (P,T ) denote the complement of EQ (P,T ) . 
We show Pr EQ (P,T ) | IN (P,T )</p><p>To see this, note that by the definition of the event IN (P,T ) , if we restrict ourselves to primary edges then the component of u in H &#964; equals P . We thus have that for the event EQ (P,T ) to be true at least one of the vertices in P must have an incoming or outgoing secondary edge. Hence the expression Pr EQ (P,T ) | IN (P,T ) can be upper bounded by Pr[a vertex in P has an incoming or outgoing secondary arc in</p><p>Note that event IN (P,T ) is determined solely by choices of primary arcs. By independence of these choices and choices of secondary arcs, conditioning on IN (P,T ) does not affect the distribution of secondary arcs. So the probability that any of the nodes in P selects a secondary edge is at most &#8730; &#949;. Thus, by union bound, the probability that any of the |P | 2000 &#8226; ln(1/&#949;) vertices in P pick a secondary arc is at most &#8730; &#949; &#8226; 2000 &#8226; ln(1/&#949;). We now turn our attention to incoming secondary arcs. First, considering the secondary arcs that go into u, we have</p><p>because any arc (w, u) &#8712; B(v, u) appears as a primary arc in G &#964; independently with probability at least z(w, u)/2 and the appearance of such an arc implies that u has an incoming primary arc in H &#964; and is therefore matched; i.e., the event F u is false in this case. We thus have z(B(v, u)) 2 ln(1/c). Further, since (P, T ) &#8712; C 2 , we have z(B(P )) z(B(P, T )) 2 ln(1/&#949;). Again using that the conditioning on IN (P,T ) does not affect the distribution of secondary edges, we have that the probability of an incoming secondary arc to any vertex in P is at most</p><p>Thus, by union bound, the probability that any vertex in P has an incoming or outgoing secondary arc conditioned on IN (P,T ) is at most</p><p>for sufficiently small &#949;, which implies ( <ref type="formula">19</ref>) via <ref type="bibr">(20)</ref>. We now show how the above concludes the proof of the claim. 
We have shown that each of the two sets C 1 , C 2 contributes at most ε 1/3 /6 to (18) (where we use that Σ w∈Nv(v) z ′ w = 1). Hence,</p><p>Σ (P,T )∈P</p><p>This intuitively concludes the proof of the claim, as <ref type="bibr">(19)</ref> says that Pr[EQ (P,T ) |IN (P,T ) ] is almost 1.</p><p>The formal calculations are as follows. Since the event EQ (P,T ) implies the event IN (P,T ) , we have that </p><p>We use this to rewrite Pr[IN (P,T ) | F u ]. Specifically, by the law of total probability, it can be rewritten as the sum of the expressions <ref type="bibr">(22)</ref> and ( <ref type="formula">23</ref>) below:</p><p>and Pr[EQ (P,T ) ∧ IN (P,T ) | F u ],</p><p>where ( <ref type="formula">23</ref>) can be upper bounded as follows: it is at most Pr[EQ (P,T ) ]/(1 − ε 1/3 /7) • (ε 1/3 /7) (by ( <ref type="formula">19</ref>) and ( <ref type="formula">21</ref>)), which is at most Pr[EQ (P,T ) ] • ε 1/3 /6 for ε small enough.</p><p>As at most one of the events {EQ (P,T ) } (P,T )∈P is true in any realization of G τ , we have that Σ (P,T )∈P Pr[EQ (P,T ) ∧ IN (P,T ) | F u ] ≤ Σ (P,T )∈P Pr[EQ (P,T ) ] • ε 1/3 /6 ≤ ε 1/3 /6. Thus, again using that Σ w∈Nv(v) z w ≤ 1, we have that <ref type="bibr">(18)</ref> Σ (P,T )∈P</p><p>The previous claim bounded the contribution of certified primary paths in C \ P to <ref type="bibr">(18)</ref>. The following claim bounds the contribution of paths in P. Claim 3.21. Let P ⊆ C contain those certified primary paths (P, T ) of C that satisfy: P has length less than 2000 • ln(1/ε) and z(B(P, T )) ≤ 2 ln(1/ε). Then, we have</p><p>Proof. We prove the claim in two steps: first we construct a chain of distributions that interpolates between the unconditional distribution of H τ and its conditional distribution, and then bound the expected number of vertices that change their matched status along that chain. 
For the remainder of the proof we fix the certified primary path (P, T ).</p><p>Constructing a chain of distributions. Let H (0) τ denote the unconditional distribution of H τ when v arrives, and let H (n) τ denote the distribution of H τ conditioned on EQ (P,T ) when v arrives.</p><p>Here n = |V | is the number of vertices in the input graph. For every w ∈ V let F (0) w denote the indicator of w being free when v arrives (unconditionally) and let F (n) w denote the indicator of w being free when v arrives conditioned on EQ (P,T ) . Note that F (0) is determined by H (0) τ and F (n) is determined by H (n) τ . For t = 0, . . . , n, we define distributions H (t) τ interpolating between the two as follows; fix an arbitrary order w 1 , . . . , w n of the vertices. As in Lemma 3.15, for every w ∈ V we denote the unconditional distribution of its primary choice by p w , and the unconditional distribution of its secondary choice by s w . Similarly, we denote the conditional distribution (given EQ (P,T ) ) of the primary choice by p̃ w and the conditional distribution of the secondary choice by s̃ w . For every t = 0, . . . , n, the primary choices of vertices w j , j = 1, . . . , t are sampled independently from p̃ w j , and the primary choices of vertices w j , j = t+1, . . . , n are sampled independently from the unconditional distributions p w j . Similarly, secondary choices of vertices w j , j = 1, . . . , t are sampled independently from s̃ w j and secondary choices of vertices w j , j = t + 1, . . . , n are sampled independently from s w j . Note that H (0) τ is sampled from the unconditional distribution of H τ , and H (n) τ is sampled from the conditional distribution (conditioned on EQ (P,T ) ), as required, due to the independence of the conditional probabilities p̃ w j and s̃ w j , by Lemma 3.15. For t = 0, . . . , n let M t denote the matching constructed by our algorithm on H (t) τ , and let F (t) w be the indicator variable for w being free when v arrives in the DAG sampled from H (t) τ .</p><p>Coupling the distributions of H (t) τ . 
We now exhibit a coupling between the H (t) τ , t = 0, . . . , n. Specifically, we will show that for every such t the following holds.</p><p>where R(w t+1 ) is as defined in Lemma 3.15 with regard to the certified primary path (P, T ).</p><p>Recall that z(R(w t+1 )) is the total probability assigned to arcs leaving w t+1 which are ruled out from being primary arcs in G τ by conditioning on EQ (P,T ) . We construct the coupling by induction. The base case corresponds to t = 0 and is trivial. We now give the inductive step (t → t + 1). We write w := w t+1 to simplify notation. Let Z p ∈ N w (w) denote the primary choice of w in H (t) τ , and let Z s ∈ N w (w) denote the secondary choice of w in N w (w) (they are sampled according to the unconditional distributions p w and s w respectively). Let Z̃ p ∈ N w (w) and Z̃ s ∈ N w (w) be sampled from the conditional distributions p̃ w and s̃ w respectively, such that the joint distributions (Z p , Z̃ p ) and (Z s , Z̃ s ) satisfy Pr[Z p ≠ Z̃ p ] = TV(p w , p̃ w ) and Pr[Z s ≠ Z̃ s ] = TV(s w , s̃ w ).</p><p>First, we note that if Z p = Z̃ p and Z s = Z̃ s , then w = w t+1 is matched to the same neighbor under H (t) τ and H (t+1) τ , and so M t = M t+1 , due to the greedy nature of the matching constructed. Otherwise, by Lemma 3.9, at most two vertices have different matched status in M t and M t+1 (in the former case every vertex has the same matched status). To summarize, we have, for R(w) determined by (P, T ) as in Lemma 3.15, that the expected number of vertices whose matched status differs between M t and M t+1 is at most</p><p>2(TV(p w , p̃ w ) + TV(s w , s̃ w )) (by <ref type="bibr">(25)</ref> and union bound) ≤ 4z(R(w)).</p><p>(by Lemma 3.15)</p><p>This concludes the proof of the inductive step, and establishes <ref type="bibr">(24)</ref>. In particular, we get</p><p>by the definition of R(P, T ) = ∪ w R(w) in Lemma 3.15.</p><p>We now finish the claim. 
First note that for any (P, T ) such that P has length at most 2000 • ln(1/ε) and z(B(P, T )) ≤ 2 ln(1/ε), one has Σ w z(R(w)) = z(R(P, T )) = O(ln(1/ε)). Indeed, by Lemma 3.15 and linearity of z, recalling that u is the root of P and that no vertex appears after v (and thus B(v, u) = {(w, u) | w arrives between u and v}), we have</p><p>We now bound each contribution to the above upper bound on Σ w z(R(w)) = z(R(P, T )) in (28). First, we have that z(B(P, T )) ≤ 2 ln(1/ε) by assumption of the lemma. To bound the contribution of z(B(v, u)), we note that by Property IV, we have</p><p>because any arc e = (w, u) appears as a primary arc in G τ with probability z ′ (w, u) ≥ z(w, u)/2, independently of other such arcs, and the appearance of any such edge implies that u has an incoming primary edge in H τ when v arrives and is therefore matched; i.e., the event F u is false in this case. We thus have z(B(v, u)) ≤ 2 ln(1/c). Finally, it remains to note that for every one of the at most 2000 • ln(1/ε) + 1 vertices w ∈ P ∪ {w : T = (w, w ′ )} the contribution of z({w} × N w (w)) to the right hand side of (28) is at most 1 + Cε ≤ 2, by Lemma 3.10, (2). Putting these bounds together, we get that for sufficiently small ε,</p><p>The term we wish to upper bound is at most</p><p>Σ w | Pr[F w | EQ (P,T ) ] − Pr[F w ] | (by Lemma 3.10, (3))</p><p>then, using ( <ref type="formula">27</ref>) and (29), we find that the term we wish to upper bound is at most</p><p>completing the proof.</p><p>Finally, we obtain Lemma 3.19 by combining Claim 3.20 and Claim 3.21, to find that, as claimed,</p><p>Σ (P,T )∈P</p></div>
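The coupling step in the proof of Claim 3.21 uses the standard fact that two distributions p and p̃ can be jointly sampled so that the two samples disagree with probability exactly TV(p, p̃). The following Python sketch (purely illustrative; the function names and the small example distributions are ours, not from the paper) shows such a maximal coupling:

```python
import random

def total_variation(p, q):
    """TV distance between two distributions given as probability lists."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def maximal_coupling(p, q, rng):
    """Sample (Z, Z') with Z ~ p, Z' ~ q, and Pr[Z != Z'] = TV(p, q).

    With probability 1 - TV(p, q) both samples are drawn (equal) from the
    common part min(p, q); otherwise they are drawn from the normalized
    excesses of p and q, whose supports are disjoint, so they differ.
    """
    common = [min(pi, qi) for pi, qi in zip(p, q)]
    if rng.random() < sum(common):  # sum(common) = 1 - TV(p, q)
        z = rng.choices(range(len(p)), weights=common)[0]
        return z, z
    zp = rng.choices(range(len(p)),
                     weights=[pi - ci for pi, ci in zip(p, common)])[0]
    zq = rng.choices(range(len(q)),
                     weights=[qi - ci for qi, ci in zip(q, common)])[0]
    return zp, zq

# Demo: empirical disagreement frequency matches the TV distance.
rng = random.Random(0)
p, q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
samples = [maximal_coupling(p, q, rng) for _ in range(20000)]
mismatch = sum(a != b for a, b in samples) / len(samples)
```

In the proof, this is applied coordinate-wise to the primary and secondary choices of the single vertex w whose distribution changes between H (t) τ and H (t+1) τ, which is why only TV(p w , p̃ w ) + TV(s w , s̃ w ) enters the bound.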
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.6">Bounding the Impact of Bad Vertices</head><p>In this section, we show that we can completely ignore the bad vertices without losing too much.</p><p>From the definition of good vertices, for a bad vertex v, we have that Pr τ [H τ has a primary path rooted at v of length at least 2000 • ln(1/ε)] ≥ ε 6 .</p><p>As the main result of this section, we prove the following theorem:</p><p>Theorem 3.22. The number of bad vertices is at most ε 3 • Σ e∈E x e .</p><p>To prove this, we first describe a charging mechanism in which, for each bad vertex, a charge of one is distributed among a subset of other vertices. Then, using the following supplementary lemma, we show that the total distributed charge over all vertices in the graph is at most ε 3 • Σ (u,v)∈E x uv . Lemma 3.23. We call a primary path P a primary predecessor path of v if it ends at v. Then, the probability that H τ contains a primary predecessor path P of v with |P | ≥ 1000 • ln(1/ε) and z(B(P )) ≤ 20 • ln(1/ε) is at most ε 10 . Proof. We use the principle of deferred decisions and traverse the path backwards. Let b be the current vertex, which is initially set to v. Consider all incoming arcs to b, say (a 1 , b), . . . , (a k , b), where we index the a's by time of arrival; i.e., a i arrives before a j if i &lt; j (and b arrived before any a i ).</p><p>First consider the random choice of a 1 and see if it selects the arc (a 1 , b).</p><p>• If it does, then the path including b in H τ will use the arc (a 1 , b).</p><p>• Otherwise, if a 1 does not select the arc (a 1 , b), then go on to consider a 2 and so on.</p><p>If none of a 1 , . . . , a k selects b, then the process stops; i.e., the primary path starts at this vertex since b has no incoming primary arc. Otherwise let i be the first index so that (a i , b) was selected. Then (a i , b) is in the primary path ending at v in H τ . Now, observe that none of a 1 , . . . , a i−1 may be in the path in this case, because these vertices arrived before a i and after b. 
Moreover, we have not revealed any randomness regarding a i+1 , . . . , a k that may appear later in the path. We can therefore repeat the above process with b now set to a i and "fresh" randomness for all vertices we consider, as the random choices of arcs of all vertices are independent. We now show that this process, with good probability, does not result in a long predecessor path P of low z(B(P )) value.</p><p>Recall from Lemma 3.10, (5), that z(u, v) ≤ 3/5 for all (u, v) ∈ V × V . Suppose that Σ k i=1 z(a i , b) ≥ 4/5. Let j be the first index such that Σ j i=1 z(a i , b) ≥ 1/5. Thus Σ j i=1 z(a i , b) ≤ 1/5 + 3/5 = 4/5 (as z(a j , b) ≤ 3/5), and hence the probability that none of the first j vertices selects b is at least Π j i=1 (1 − z(a i , b)) ≥ 1 − Σ j i=1 z(a i , b) ≥ 1/5. Consequently, with probability at least 1/5, vertex b either has no predecessor or the increase to z(B(P )) is at least 1/5.</p><p>In the other case, we have Σ k i=1 z(a i , b) ≤ 4/5. Then the probability that b has no predecessor is at least Π k i=1 (1 − z(a i , b)) ≥ 1 − Σ k i=1 z(a i , b) ≥ 1/5. Therefore, at any step in the above random process, with probability at least 1/5, we either stop or increase z(B(P )) by 1/5. Let Z i be an indicator variable for the random process either stopping or increasing z(B(P )) by at least 1/5 at step i, and notice that according to the above random process, each Z i is lower bounded by an independent Bernoulli variable with probability 1/5. Thus if we define Z = Σ i∈[1000•ln(1/ε)] Z i , we have E[Z] ≥ 200 • ln(1/ε), and thus by standard coupling arguments and Chernoff bounds, we have that Pr[Z &lt; 100 • ln(1/ε)] ≤ ε 10 .</p><p>But if the path does not terminate within 1000 • ln(1/ε) steps and Z ≥ 100 • ln(1/ε), then z(B(P )) ≥ 20 • ln(1/ε).</p><p>We now prove Theorem 3.22.</p><p>Proof of Theorem 3.22. By Lemma 3.16, the probability that H τ has a primary path P with z(B(P )) ≥ <ref type="bibr">20</ref> • ln(1/ε) starting at v is at most ε 10 . 
Thus, for a bad vertex u, the probability that H τ has some primary path P rooted at u with |P | ≥ 2000 • ln(1/ε) and z(B(P )) ≤ 20 • ln(1/ε) is at least ε 6 − ε 10 ≥ ε 6 /2.</p><p>Let k = 20 • ln(1/ε) and ℓ = 2000 • ln(1/ε). Let P u be the set of all primary paths P rooted at u such that z(B(P )) ≤ k and |P | = ℓ. Since all such primary paths with length more than ℓ are extensions of those with length exactly ℓ, we have Σ P ∈P u Pr[P is in H τ ] ≥ ε 6 /2. For each such path P ∈ P u , consider the two vertices w P ℓ and w P ℓ−1 at distances ℓ and ℓ − 1 respectively from u. For each such vertex w P j (j ∈ {ℓ − 1, ℓ}), we distribute a charge proportional to Pr[P is in H τ ]; that the total charge distributed this way is at most 1 follows because the y w 's form a feasible dual solution (to the vertex cover problem).</p><p>On the other hand, consider how many times each vertex is charged. For this, for every vertex w, let Q w be the set of primary predecessor paths Q of w such that |Q| = ℓ − 1 and z(B(Q)) ≤ k. As |Q| = ℓ − 1 ≥ 1000 • ln(1/ε) for all Q ∈ Q w , by Lemma 3.23, Σ Q∈Q w Pr[Q is in H τ ] ≤ ε 10 . For a primary predecessor path Q ∈ Q w (or one of its extensions), the vertex w can be charged at most twice according to the above charging mechanism. Since any predecessor path of w with length more than ℓ − 1 must be an extension of one with length exactly ℓ − 1, we have that the amount w is charged is at most</p><p>Summing over all w ∈ V and using Lemma 3.3, the total charge is at most </p></div>
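The heart of Lemma 3.23 is a domination argument: each backward step either stops or adds at least 1/5 to z(B(P )), with probability at least 1/5, so a long predecessor path with small z(B(P )) requires an unlikely shortfall of successes among roughly 1000 • ln(1/ε) Bernoulli(1/5) trials. A small Monte Carlo sketch of this dominating process (parameter choices mirror the lemma; the helper name is ours):

```python
import math
import random

def shortfall_probability(eps, trials, rng):
    """Estimate the probability that fewer than 100*ln(1/eps) successes
    occur among 1000*ln(1/eps) independent Bernoulli(1/5) trials.

    Each success models a step of the backward process that either stops
    or increases z(B(P)) by at least 1/5; the Chernoff bound in the
    lemma shows this shortfall probability is polynomially small in eps.
    """
    steps = int(1000 * math.log(1 / eps))
    threshold = 100 * math.log(1 / eps)
    bad = 0
    for _ in range(trials):
        successes = sum(rng.random() < 0.2 for _ in range(steps))
        if successes < threshold:
            bad += 1
    return bad / trials
```

For ε = 0.1 the mean number of successes is about 460 against a threshold of about 230, a deviation of roughly twelve standard deviations, so any feasible simulation returns an estimate of 0, consistent with the lemma's bound.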
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.7">Calculating the Competitive Ratio of Algorithm 3</head><p>We now show that the competitive ratio of Algorithm 3 is indeed 1/2 + α for some sufficiently small absolute constant α &gt; 0, thus proving Theorem 1.2. This essentially combines the facts that for good vertices, the matching probability is very close to the fractional values of incident edges, and that the number of bad vertices is very small compared to the total value of the fractional algorithm (over the entire graph).</p><p>where the last line holds for a sufficiently small constant ε &gt; 0.</p><p>2. If y 0 = 0 (thus θ = 1), then the increase in u's fractional degree was:</p><p>For every subsequent increase of the fractional degree due to a newly-arrived vertex we have that:</p><p>This concludes the proof of the lower bound.</p><p>For the upper bound, by [25, Invariant 1], we have that</p><p>This upper bound can be simplified by using Equation ( <ref type="formula">5</ref>), as follows. Taking (30), adding and subtracting 1 + f (1 − y u ) and writing the integral</p><p>C Deferred Proofs of Section 3.4.2</p><p>In this section we present the proofs deferred from Section 3.4.2. We start by presenting a more manageable form for the function f = f 1+2ε which we use.</p><p>A function in the WW family is determined by a parameter k ≥ 1 and takes the following form</p><p>Letting κ = 1 + 2ε, we get that f := f κ is of the form</p><p>Clearly this is water filling when ε = 0; otherwise the first term is like water filling, and the second term is less than 1 for z ≤ 1/2 and greater than 1 if z &gt; 1/2.</p><p>By Taylor expansion, we obtain the following more manageable form for f .</p><p>Lemma C.1. There exists ε 0 ∈ (0, 1) such that for every ε ∈ (0, ε 0 ) and every θ ∈ [0, 1], we have</p><p>Proof. 
Taking the Taylor expansion of e x , we find that</p><p>To be precise, for θ ∈ [0, 1] and 0 &lt; ε ≤ ε 0 ≤ 1 (implying for example (θ + ε)/(1 + ε − θ) ≤ 2/ε), we will show that the terms dropped in the third, fourth and fifth lines are all at most some O((ln( 1 ε ) • ε) 2 ) = o(ε), from which the lemma follows, as the sum of these terms is at most 0.01ε for ε ≤ ε 0 and ε 0 sufficiently small. Indeed, in the third line, we dropped</p><p>where the last step used that ln(1/ε) • ε ≤ 1 holds for all ε &gt; 0. In the fourth line, we dropped</p><p>Finally, in the fifth line, we dropped</p><p>Given this more manageable form for f , we can now turn to prove Lemma 3.10, restated below.</p><p>Lemma 3.10. (Basic bounds on conditional probabilities z u ) There exist absolute constants c ∈ (0, 1) and C &gt; 1/c &gt; 1 and ε 0 ∈ (0, 1) such that for every ε ∈ (0, ε 0 ) the following holds: for every vertex v ∈ V , if y u is the dual variable of a neighbor u ∈ N v (v) before v's arrival and θ is the value chosen by Algorithm 1 on v's arrival, then for z u as defined in Algorithm 3, we have:</p><p>(1) If θ ∈ (c, 1 − c), then Σ u∈Nv(v) z u ≥ 1,</p><p>(2) If θ ∈ [0, 1], then Σ u∈Nv(v) z u ≤ 1 + Cε,</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>The case of odd n is similar; as it is not needed to establish the result of this theorem, we omit it.</p></note>
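Lemma C.1 discards Taylor terms of order O((ln(1/ε) • ε) 2 ), which is o(ε). The following quick numeric check (our own illustration, not part of the proof, and taking the implicit constant in the O(•) to be 1) confirms that this error bound falls below the 0.01ε slack used in the lemma once ε is small:

```python
import math

def error_to_slack_ratio(eps):
    """Ratio of the (ln(1/eps) * eps)**2 error bound to the 0.01*eps
    slack allowed in Lemma C.1; the lemma needs this ratio to drop
    below 1 for sufficiently small eps (it tends to 0 as eps -> 0)."""
    return (math.log(1 / eps) * eps) ** 2 / (0.01 * eps)

# The ratio decreases towards 0 along eps = 1e-2, 1e-4, 1e-6, 1e-8.
ratios = [error_to_slack_ratio(10.0 ** -k) for k in (2, 4, 6, 8)]
```

This is exactly the sense in which "the sum of these terms is at most 0.01ε for ε ≤ ε 0 and ε 0 sufficiently small" in the proof above.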
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1"><p>Here and throughout the paper, we let x + := max{0, x} for all x &#8712; R.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2"><p>It is also possible to compute them exactly if we allow the algorithm to take exponential time.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3"><p>To be precise, conditioning on a primary path P with a so-called termination certificate T , see Definition 3.13. In the overview, we omit this detail and consider the event EQ P,T (instead of EQ P ) in the formal proof.</p></note>
		</body>
		</text>
</TEI>
