<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Static vs. Adaptive Security in Perfect MPC: A Separation and the Adaptive Security of BGW</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>June 2022</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10415405</idno>
					<idno type="doi"></idno>
					<title level='j'>Leibniz international proceedings in informatics</title>
<idno type="ISSN">1868-8969</idno>
<biblScope unit="volume">230</biblScope>
<biblScope unit="issue"></biblScope>					

					<author>Gilad Asharov</author><author>Ran Cohen</author><author>Oren Shochat</author><author>Dana Dachman-Soled</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[Adaptive security is a highly desirable property in the design of secure protocols. It tolerates adversaries that corrupt parties as the protocol proceeds, as opposed to static security, where the adversary corrupts the parties at the onset of the execution. The well-accepted folklore is that static and adaptive security are equivalent for perfectly secure protocols. Indeed, this folklore is backed up by a transformation of Canetti et al. (EUROCRYPT'01), showing that any perfectly secure protocol that is statically secure and satisfies some basic requirements is also adaptively secure. Yet, the transformation results in an adaptively secure protocol with inefficient simulation (i.e., where the simulator might run in super-polynomial time even if the adversary runs in polynomial time). Inefficient simulation is problematic when using the protocol as a sub-routine in the computational setting. Our main question is whether an alternative efficient transformation from static to adaptive security exists. We show an inherent difficulty in achieving this goal generically. In contrast to the folklore, we present a protocol that is perfectly secure with efficient static simulation (and therefore also adaptively secure with inefficient simulation), but for which efficient adaptive simulation does not exist (assuming the existence of one-way permutations). In addition, we prove that the seminal protocol of Ben-Or, Goldwasser and Wigderson (STOC'88) is secure against adaptive, semi-honest corruptions with efficient simulation. Previously, adaptive security of the protocol, as is, was only known either for a restricted class of circuits, or for all circuits but with inefficient simulation.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Secure multiparty computation (MPC) <ref type="bibr">[31,</ref><ref type="bibr">20]</ref> enables a set of mutually distrustful parties to compute a joint function while keeping the privacy of their inputs and the correctness of their outputs. The seminal results from the '80s <ref type="bibr">[31,</ref><ref type="bibr">20,</ref><ref type="bibr">5,</ref><ref type="bibr">11,</ref><ref type="bibr">30]</ref>, as well as the vast majority of MPC protocols in the literature, were proven secure with respect to a static adversary; that is, security is guaranteed as long as the adversary decides which parties to corrupt at the onset of the execution. A more realistic setting, first considered by Beaver and Haber <ref type="bibr">[4]</ref> and by Canetti et al. <ref type="bibr">[10]</ref>, considers an adaptive adversary, who can dynamically decide which parties to corrupt during the course of the protocol. Adaptive security is known to be strictly stronger than static security, with many impossibility results separating the two notions, e.g., <ref type="bibr">[10,</ref><ref type="bibr">9]</ref>.</p><p>In this work we focus on the well-studied setting of perfect security, where all existing separations from the literature no longer hold. Perfectly secure protocols guarantee security facing computationally unbounded adversaries without any error probability. This is a highly desirable property that in some sense provides the strongest security notion for MPC. For example, Kushilevitz et al. <ref type="bibr">[24]</ref> showed that any perfectly secure protocol that is proven secure in the standalone model using a straight-line and black-box simulation <ref type="foot">1</ref> automatically guarantees security under universal composition (UC) <ref type="bibr">[8]</ref>. <ref type="foot">2</ref> Most relevant to our work, Canetti et al. <ref type="bibr">[9]</ref> showed that any perfect, statically secure protocol, satisfying basic requirements, remains secure also in the presence of adaptive adversaries.
This result, however, comes with a caveat, since the transformation from static to adaptive security does not preserve the efficiency of the simulation.</p><p>Roughly speaking, a protocol is deemed secure if any attack on an execution of the protocol in the real world can be simulated also in an ideal world where a trusted party receives the inputs from all the parties and computes the function on their behalf. It is desirable that the simulator, i.e., the adversary who simulates the attack in the ideal world, will use roughly the same resources as the adversary who carries out the attack on the real-world protocol. In particular, we would like the simulator to run in polynomial time with respect to the running time of the real-world adversary; if so, we say that the simulation is efficient.</p><p>Unfortunately, the adaptive simulator in <ref type="bibr">[9]</ref> does not run in polynomial time with respect to the real-world adversary. This means that given a perfectly secure protocol against static adversaries with an efficient simulator, the transformation in <ref type="bibr">[9]</ref> guarantees adaptive security, albeit with an inefficient simulator. This leads us to the following fundamental question: Does perfect, static security with efficient simulation imply perfect, adaptive security with efficient simulation? Stated differently, is there another generic transformation from static to adaptive security, other than <ref type="bibr">[9]</ref>, that preserves the efficiency of the simulation? Are there other assumptions on the structure of the protocol and/or on the static simulation that might lead to such an efficient transformation? Or, perhaps, do there exist protocols that are statically secure but for which efficient adaptive simulation simply does not exist?</p><p>Efficient vs. inefficient simulation. 
One may ask whether the efficiency loss in the simulation makes a difference when considering perfect security: If security is anyway guaranteed against computationally unbounded adversaries, does it matter if the simulator is inefficient? Indeed, inefficient simulation is a weaker-yet-acceptable security notion when considering information-theoretic security.</p><p>It turns out that inefficient simulation has an undesirable impact when considering composition of secure protocols. For example, consider a perfectly secure protocol &#960; for computing a function f that is defined over secure communication channels, and consider a computationally secure realization of secure channels over authenticated channels (e.g., using non-committing encryption <ref type="bibr">[10]</ref>). If &#960; is secure with efficient simulation, then a composition theorem (e.g., from <ref type="bibr">[7,</ref><ref type="bibr">8]</ref>) can be used to derive a computationally secure protocol for f over authenticated channels. However, if the simulation is inefficient, then the composition will not go through, since it would result in a super-polynomial-time adversary that can break the cryptographic assumptions used to realize secure channels.</p><p>A case study: on the adaptive security of the BGW protocol. The seminal results of Ben-Or, Goldwasser, and Wigderson <ref type="bibr">[5]</ref> and of Chaum, Cr&#233;peau, and Damg&#229;rd <ref type="bibr">[11]</ref> show that any function f can be computed by an n-party protocol with perfect security as long as t &lt; n/2 of the parties are corrupted in the semi-honest setting, and as long as t &lt; n/3 are corrupted in the malicious setting. A full proof of security for the BGW protocol was given in 2011 by Asharov and Lindell <ref type="bibr">[1,</ref><ref type="bibr">2]</ref>.
The proof was specified in the static, standalone setting, and universally composable security and security against adaptive adversaries were derived using the transformations of Kushilevitz et al. <ref type="bibr">[24]</ref> and Canetti et al. <ref type="bibr">[9]</ref>, respectively. However, as discussed above, adaptive security is obtained with an inefficient simulation.</p><p>This issue was revisited by Damg&#229;rd and Nielsen <ref type="bibr">[15]</ref>, who showed that the semi-honest version of the BGW protocol achieves adaptive security with efficient simulation. However, the result of <ref type="bibr">[15]</ref> holds only for circuits in which each output wire is a direct output of a multiplication gate. Obviously, one can manually add such "multiplications by 1" to each output wire. While this seems to suffice, there are two reasons why we revisit this problem: First, for linear functions, the semi-honest version of the BGW protocol can tolerate up to n corruptions, whereas the requirement that each output wire is a direct output of a multiplication gate reduces the corruption threshold to t &lt; n/2. Second, this subtlety has been neglected by prior works that relied on <ref type="bibr">[15]</ref>; e.g., Lin et al. <ref type="bibr">[26,</ref><ref type="bibr">Lem. 6.2]</ref> claimed that any degree-2 function can be computed by the BGW protocol in two rounds with adaptive security and efficient simulation. Yet, when adding multiplication gates the round complexity increases, which implies a gap in the literature, as there is no proof for the claim mentioned in <ref type="bibr">[26]</ref>.</p><p>Alternatively, one can interpret this additional restriction on the circuit as adding a re-randomization step at the very end of the protocol, before the parties reconstruct their output. Is this step essential to achieve adaptive security for all circuits?
Can one prove the adaptive security of the original protocol directly, without the additional communication round?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Our Results</head><p>Our work revisits the question of static versus adaptive security in perfectly secure multiparty computation. We show that, in contrast to the "weaker" definition of adaptive security (i.e., with inefficient simulation), perfect static security no longer implies perfect adaptive security when demanding the simulation to be efficient, even when merely considering semi-honest adversaries. Complementarily, we show that the BGW protocol is adaptively secure with efficient simulation even without changing the underlying circuit; this answers an open question posed by Damg&#229;rd and Nielsen <ref type="bibr">[15]</ref>. We focus on semi-honest security, which is enough to highlight the subtleties that arise; indeed, the gap in the literature discussed above already appears in this setting. We conjecture that the analysis extends to the malicious case using standard secret-sharing techniques.</p><p>We proceed to describe our results in more detail.</p><p>Separating perfect adaptive security from perfect static security. Our first result shows that under some cryptographic assumptions, there is no hope of finding an alternative (efficient) transformation to that of Canetti et al. <ref type="bibr">[9]</ref>, and that the inefficiency of the adaptive simulation is inherent. More precisely, there exist protocols that admit perfect static security with efficient simulation, but for which an efficient adaptive simulation does not exist.</p><p>&#9654; Theorem 1. Assume the existence of a one-way permutation. Then, there exists an n-party functionality f and a protocol that securely computes f with efficient perfect static simulation, but for which efficient perfect adaptive simulation does not exist.</p><p>The theorem is proven by showing a protocol for which all the additional requirements of <ref type="bibr">[9]</ref> hold, and therefore an (inefficient) adaptive simulation does exist for this protocol, but any efficient adaptive simulator can be used to invert the one-way permutation with inverse-polynomial probability. Interestingly, this implies that the protocol is regarded as adaptively secure in the perfect setting, but the exact same protocol is not adaptively secure in the computational setting, where both the adversary and the simulator should run in polynomial time in the security parameter. We therefore derive the following somewhat counter-intuitive corollary.</p><p>&#9654; Corollary 2. Assume the existence of a one-way permutation.
Then, there exists an n-party functionality f and a protocol that securely computes f with perfect adaptive security, but that does not securely compute f with computational adaptive security.</p><p>Revisiting adaptive security in the standalone setting. The above does not rule out the possibility of finding additional requirements on the protocol that would imply efficient adaptive simulation. This is exactly the approach taken by Damg&#229;rd and Nielsen <ref type="bibr">[15]</ref>, who showed that under additional requirements on the protocol, an efficient adaptive simulation exists.</p><p>The transformation of <ref type="bibr">[15]</ref> is proved directly in the UC framework in its full generality, capturing reactive functionalities and concurrency issues at once. However, the strong guarantees do not come without a price, since the requirements on the static simulator must capture multiple input phases (as required for reactive computations) and deal with the technical overhead needed for concurrent composition, e.g., incorporating an "online" environment into the definition. As a small side contribution, we simplify this transformation.</p><p>Specifically, in addition to proving perfect static security, the transformation of <ref type="bibr">[15]</ref> requires an additional (efficient!) algorithm, called "Patch," for sampling randomness that explains the simulated protocol whenever a corruption occurs. The adaptive simulator invokes the static simulator on a dynamically growing set of corrupted parties (initially empty). At any point, the simulation of the protocol towards the adaptive adversary is done by forwarding messages from the adaptive adversary to the static simulator, and vice versa.
Upon the corruption of a new party, say P i , the Patch algorithm receives the state of the static simulator up to this point, together with the input and output of P i , and outputs a new state for the static simulator that allows the continuation of the simulation as if P i had been statically corrupted from the beginning.</p><p>We then propose an alternative recipe for proving universal composability and (efficient) adaptive security in the perfect setting: 1. Prove that the protocol is (efficiently) statically secure in the perfect standalone setting and that the protocol satisfies some natural requirements (similar to those in <ref type="bibr">[9]</ref>). 2. Show the existence of an efficient "Patch" algorithm corresponding to the static simulator. 3. Apply our transformation (essentially a distilled version of <ref type="bibr">[15]</ref>) to conclude that the protocol is perfectly adaptively secure with an efficient simulator in the standalone setting. 4. Conclude, using <ref type="bibr">[24]</ref>, that the protocol is also secure in the perfect adaptive setting with efficient simulation and with universal composability.</p><p>We remark that this result is not technically novel and is inspired by <ref type="bibr">[15]</ref>. We hope that providing an alternative definition in the standalone setting will simplify proofs of efficient adaptive security in the future, as the designer of a protocol can focus on the standalone setting.</p><p>Adaptive security of the BGW protocol. Finally, we follow the recipe proposed above and show that the (semi-honest) BGW protocol is efficiently adaptively secure. No additional step in the protocol is needed, and the proof works for any circuit (as opposed to the proof of <ref type="bibr">[15]</ref>). This result solves an open problem raised by <ref type="bibr">[15]</ref>: whether the assumption on the circuit (that each output wire is a direct output of a multiplication gate) is necessary.
This reaffirms the security of the BGW protocol <ref type="bibr">[5]</ref>, and closes a gap in the proofs of <ref type="bibr">[1,</ref><ref type="bibr">15]</ref>, providing sound foundations for cryptography.</p></div>
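The interplay between the static simulator and the Patch algorithm in steps 1-4 of the recipe above can be sketched in code. This is a hedged illustration only: the class and method names (`step`, `view_of`, `patch`) are our own placeholder interfaces, not the formalism of the cited works.

```python
# Hedged sketch of the recipe's adaptive simulator: a static simulator plus
# an efficient "Patch" algorithm. All names and interfaces here are
# illustrative placeholders, not taken verbatim from the paper or from [15].

class AdaptiveSimulator:
    def __init__(self, static_sim, patch):
        self.static_sim = static_sim  # simulates honest parties' messages
        self.patch = patch            # re-explains the state on corruption
        self.corrupted = set()        # dynamically growing corrupted set

    def relay(self, msg_from_adversary):
        # Simulation towards the adaptive adversary: messages are forwarded
        # to the static simulator (run on the current corrupted set) and back.
        return self.static_sim.step(msg_from_adversary)

    def corrupt(self, i, input_i, output_i):
        # Upon corruption of P_i, Patch receives the static simulator's state
        # together with P_i's input and output, and returns a new state under
        # which the simulation continues as if P_i had been statically
        # corrupted from the start. Crucially, patch must run in poly time.
        self.corrupted.add(i)
        self.static_sim.state = self.patch(
            self.static_sim.state, i, input_i, output_i)
        return self.static_sim.view_of(i)
```

The efficiency of the entire adaptive simulation then reduces to the efficiency of the supplied patch procedure, which is exactly the extra requirement in step 2 of the recipe.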
<div xmlns="http://www.tei-c.org/ns/1.0"><head>&#9654; Theorem 3. Let f be a deterministic n-party functionality. The BGW protocol securely computes f with perfect adaptive security and efficient simulation facing a semi-honest adversary corrupting t &lt; n/2 parties.</head><p>Further, if f is a linear function, the BGW protocol securely computes f with perfect adaptive security and efficient simulation facing a semi-honest adversary corrupting t &lt; n parties.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Related Work</head><p>Adaptive security is known to be strictly stronger than static security in many settings, with many impossibility results separating the two notions. Below, we compare our separation to existing separations between static and adaptive security from the literature.</p><p>The first separation was presented by <ref type="bibr">[10]</ref> and relied on a positive error probability of the statically secure protocol. They considered a dealer that secret-shares a value to a random set of parties and later announces their identities; an adaptive adversary can corrupt the members of the set and learn the value, while a static adversary can only guess this set ahead of time and succeeds with negligible (yet positive) probability. In this setting, the seminal protocol of Rabin and Ben-Or <ref type="bibr">[30]</ref>, which guarantees statistical information-theoretic security, is not secure in the adaptive setting, as illustrated by Cramer et al. <ref type="bibr">[14]</ref>, who gave an adaptively secure variant of the protocol. Separations based on statistical security are different from ours, as we consider perfect protocols that have zero error probability.</p><p>In the computational-security setting, many statically secure primitives do not remain secure under adaptive corruptions. For example, Nielsen <ref type="bibr">[29]</ref> showed that public-key encryption for unbounded messages requires a programmable random oracle. Canetti et al. <ref type="bibr">[9]</ref> separated adaptively secure commitments from statically secure ones. Lindell and Zarosim <ref type="bibr">[27]</ref> showed that achieving adaptively secure oblivious transfer (OT) requires stronger assumptions than statically secure OT. Katz et al. <ref type="bibr">[23]</ref> ruled out adaptively secure fully homomorphic encryption; the latter result was generalized in <ref type="bibr">[13]</ref>. Separations based on computational assumptions are different from ours, as we consider information-theoretic protocols that remain secure against computationally unbounded adversaries.</p><p>Canetti et al. <ref type="bibr">[9]</ref> showed a separation based on the ability of the adversary to corrupt a party and change its input to the protocol based on messages that have already been transmitted. Similar separations were illustrated also for broadcast protocols <ref type="bibr">[22,</ref><ref type="bibr">12]</ref>. Canetti et al. <ref type="bibr">[9]</ref> showed that such separations no longer hold when considering protocols that have a committal round <ref type="bibr">[28]</ref>, i.e., a fixed round in which all inputs to the protocol are committed. These separations hold only for malicious adversaries that can deviate from the protocol; our separation applies to semi-honest adversaries that cannot deviate from the protocol in any way.</p><p>Other separations are known when considering restricted interaction patterns, as was shown by Garay et al. <ref type="bibr">[17]</ref> for protocols with sublinear communication, and by Boyle et al. <ref type="bibr">[6]</ref> for protocols whose communication graph admits a sublinear cut. Our separation relies on the BGW protocol, which induces a complete communication graph.</p><p>Finally, Garg and Sahai <ref type="bibr">[18]</ref> showed that constant-round MPC with black-box simulation in the plain model cannot tolerate corruption of all of the parties. Our result holds irrespective of the number of rounds and does not require corrupting all the parties.</p><p>Organization of the paper.
In Section 2 we present a technical overview of our results, in Section 3 the preliminaries, and in Section 4 we prove our separation result, showing the existence of a protocol that admits an (inefficient) adaptive simulation but no efficient adaptive simulation (assuming one-way permutations exist). Due to space limitations, we omit the definition of secure multiparty computation and the proof of adaptive security of the BGW protocol, and refer the reader to the full version of this paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Technical Overview</head><p>We provide a technical overview of our results. In Section 2.1 we review our separation result, showing the existence of a protocol that admits an (inefficient) adaptive simulation but no efficient adaptive simulation (under some cryptographic assumptions). In Section 2.2, we review the adaptive security of the BGW protocol.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Static Security Does Not Imply Adaptive Security</head><p>Definition of adaptive security. We first recall the definition of adaptive security. We remark that since the transformation of <ref type="bibr">[9]</ref> assumes some additional properties of the statically secure protocol and its simulation (specifically, it assumes that the protocol has a straight-line, black-box simulation), our description here incorporates those properties into the informal definition. Given a protocol &#960;, an adaptive adversary might corrupt parties on the fly as the protocol proceeds. Upon corrupting a party P i , the adversary sees P i 's random tape, the input it used, and all the messages it received so far. From that point on, the adversary completely controls the behavior of P i . As the protocol proceeds, the adversary might decide to corrupt additional parties, as a function of whatever it saw so far.</p><p>We follow the ideal/real simulation paradigm and say that a protocol is adaptively secure if for every such adversary in the real world, there exists a simulator in the ideal world that simulates its behavior. In the ideal world, the simulator invokes the real-world adversary and simulates the honest parties sending their messages to the corrupted parties (without knowing the inputs of the honest parties). At every round, the adversary might ask to corrupt some party P i in the simulated protocol, and the simulator corrupts the corresponding party in the ideal world. Upon corruption, the simulator learns the input of P i (and its output, if it was already computed by the trusted party), and it has to provide the adversary the input together with randomness that explains the messages sent by this party in the simulation until this point (known as "equivocality" in the literature).
This is the challenging part of the proof, as the simulator has to first simulate P i without knowing its input, "commit" to some messages on its behalf, and later, upon corruption, find randomness r i that explains all the messages that were sent so far according to the input x i . Note that the simulator is not allowed to rewind the adversary at any point of the execution, i.e., the simulation is "straight-line."</p><p>First attempt. The transformation of <ref type="bibr">[9]</ref> shows that for any perfectly secure protocol, randomness r i describing the behavior of P i so far exists. This implies that the simulator can always find it; however, finding it might not be an efficient procedure. Our first attempt is to add cryptographic hardness targeting exactly this procedure of finding the matching randomness. That is, our goal is to make finding the randomness a computationally hard problem.</p><p>It is intuitive to simply take the BGW protocol, even just for semi-honest security and for computing a linear function, with one modification: Before a party starts the protocol, it takes its random tape r, and uses OWP(r) as its randomness in the protocol, where OWP is a one-way permutation. The intuition is that whenever the adversary asks to corrupt some party, the simulator would effectively have to invert the one-way permutation, which is computationally infeasible. The construction is still statically secure since the static simulator only goes "forward"; that is, it chooses some randomness r and then uses OWP(r) as the randomness of the simulated honest party. Moreover, an efficient static simulation exists since r exists; however, it seems that an adaptive simulator would have to move "backward" and invert the one-way permutation.</p><p>To elaborate further, let n be the number of parties, and consider the function f (a 1 , . . . , a n ) = a 1 + . . . + a n ; that is, all parties receive the same output, which is the sum of their inputs.
We consider some finite field F with |F| &gt; n. The protocol is as follows:</p><p>Input: P i has a i &#8712; F as input, and randomness r i . The protocol: 1. P i computes (r i,1 , . . . , r i,n-1 ) = OWP(r i ), where each r i,j &#8712; F. 2. P i secret shares its input a i using Shamir's secret-sharing scheme with degree n-1, using the polynomial h i (x) = a i + r i,1 x + . . . + r i,n-1 x n-1 ; it gives each party P j privately the point h i (&#945; j ). 3. Upon receiving h 1 (&#945; i ), . . . , h n (&#945; i ) from all the parties, the party P i computes &#946; i = h 1 (&#945; i ) + . . . + h n (&#945; i ) and sends &#946; i to all other parties. 4. Upon receiving &#946; 1 , . . . , &#946; n , each party reconstructs the unique degree-(n-1) polynomial H(x) satisfying H(&#945; j ) = &#946; j for every &#945; j , and outputs H(0).</p></div>
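For concreteness, the toy protocol above can be exercised in code. This is a hedged sketch under assumptions of our own: the one-way permutation is replaced by a seeded pseudorandom expansion (which is emphatically not one-way), and the field size and evaluation points are arbitrary illustrative choices.

```python
# Hedged sketch of the toy sum protocol. The OWP is replaced by a seeded
# placeholder expansion (NOT one-way); it only makes the sketch runnable.
import random

P = 101  # small prime field F_P with |F| > n; illustrative choice

def placeholder_owp(seed, n):
    # Stand-in for OWP: derive n-1 field elements deterministically from seed.
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(n - 1)]

def share(a_i, coeffs, alphas):
    # h_i(x) = a_i + r_{i,1} x + ... + r_{i,n-1} x^{n-1}; one point per party.
    def h(x):
        acc, xp = a_i, 1
        for c in coeffs:
            xp = (xp * x) % P
            acc = (acc + c * xp) % P
        return acc
    return [h(alpha) for alpha in alphas]

def interpolate_at_zero(alphas, betas):
    # Lagrange interpolation of the unique degree-(n-1) polynomial H with
    # H(alpha_j) = beta_j, evaluated at 0.
    total = 0
    for j, (aj, bj) in enumerate(zip(alphas, betas)):
        num = den = 1
        for k, ak in enumerate(alphas):
            if k != j:
                num = (num * (-ak)) % P
                den = (den * (aj - ak)) % P
        total = (total + bj * num * pow(den, -1, P)) % P
    return total

def run_protocol(inputs):
    n = len(inputs)
    alphas = list(range(1, n + 1))  # distinct nonzero evaluation points
    # Steps 1-2: each P_i expands its random tape and shares its input.
    shares = [share(a, placeholder_owp(i, n), alphas)
              for i, a in enumerate(inputs)]
    # Step 3: P_i sums the shares it received; beta_i = sum_j h_j(alpha_i).
    betas = [sum(shares[j][i] for j in range(n)) % P for i in range(n)]
    # Step 4: reconstruct H and output H(0), the sum of the inputs mod P.
    return interpolate_at_zero(alphas, betas)

print(run_protocol([3, 5, 7, 11]))  # 26
```

Since H(x) is the sum of the n sharing polynomials, its constant term is the sum of the inputs, regardless of which randomness expansion is plugged in.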
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Simulating this protocol.</head><p>The above protocol can be simulated for any t &lt; n corrupted parties. Consider an adaptive simulator, and assume that the adversary already corrupted n-2 parties at the onset of the execution, say P 3 , . . . , P n . The simulator then simulates just two honest parties: P 1 and P 2 . As it does not know their inputs, it just gives the adversary 2&#8226;(n-2) independent random points as the messages of the two honest parties it is simulating. For concreteness, assume that the messages the simulated P 1 sent are (&#947; 3 , . . . , &#947; n ), where &#947; i is supposed to be delivered to P i . Now, assume that the adversary asks to corrupt P 1 and that the simulator receives its input a 1 . The simulator can now find a random polynomial h 1 (x) of degree n-1 under the constraints that h 1 (0) = a 1 and h 1 (&#945; i ) = &#947; i for every i &#8712; {3, . . . , n}. Note that these constraints define overall n-1 points. Since h 1 (x) has n coefficients, there are |F| different polynomials that satisfy these n-1 constraints. The simulator can pick one more point at random, and then uniquely determine the polynomial h 1 (x) using interpolation. For that particular polynomial, the simulator has to invert the one-way permutation and find the corresponding randomness.</p><p>The problem: the reduction to OWP. It is hard to use the above adaptive simulator to construct an inverter for the one-way permutation on some challenge point y * &#8712; F n-1 , corresponding to the n-1 leading coefficients, say y * = (r 1,1 , . . . , r 1,n-1 ). The problem is that once the adaptive simulator "commits" to the values (&#947; 3 , . . . , &#947; n ), it might be the case that there is no input a 1 for which the polynomial a 1 + r 1,1 x + . . . + r 1,n-1 x n-1 agrees with these values. That is, there is no input a 1 in the support that agrees with the challenge y * and with the simulated view simultaneously.</p><p>Second attempt. To solve the above technicality, we change the functionality and the protocol. Instead of having the input of each party P i be just a i &#8712; F, it is augmented to be a polynomial g i (x) of degree n-1 over F. The functionality is then defined as f (g 1 , . . . , g n ) = g 1 (0) + . . . + g n (0). Note that the parties input polynomials, while all the non-constant coefficients do not affect the output of the computation. In the protocol, each party P i then invokes (r i,1 , . . . , r i,n-1 ) = OWP(&#961; i ), where &#961; i is its random tape, and then defines the polynomial h i (x) := g i (x) + r i,1 x + . . . + r i,n-1 x n-1 .
It then sends to each other party P j the point h i (&#945; j ), just as in Step 2 of the previous protocol.</p><p>The difference is that now, no matter what points (&#947; 3 , . . . , &#947; n ) the adversary commits to as the messages P 1 had sent, for any challenge y * = (r * 1 , . . . , r * n-1 ), the inverter of the OWP can choose an input g 1 (x) such that the polynomial g 1 (x) + r * 1 x + . . . + r * n-1 x n-1 is in the support of the polynomials that the adaptive simulator might choose. To see that, given a challenge y * = (r * 1 , . . . , r * n-1 ) to the inverter, and given (&#947; 3 , . . . , &#947; n ) that were chosen by the simulator as the messages P 1 had sent in the first round, the inverter chooses two additional points &#947; 1 and &#947; 2 at random. It then interpolates the unique polynomial h(x) such that h(&#945; i ) = &#947; i for every i &#8712; [n], sets g 1 (x) := h(x) - (r * 1 x + . . . + r * n-1 x n-1 ), and gives g 1 (x) to the adaptive simulator as the input of P 1 .</p><p>The simulator now has to reply with some randomness &#961;&#8242; for which (r&#8242; 1 , . . . , r&#8242; n-1 ) = OWP(&#961;&#8242;) and the polynomial h&#8242; 1 (x) := g 1 (x) + r&#8242; 1 x + . . . + r&#8242; n-1 x n-1 agrees with the points (&#947; 3 , . . . , &#947; n ). Since h&#8242; 1 (&#945; 1 ) and h&#8242; 1 (&#945; 2 ) are not determined, the simulator essentially has |F| 2 different polynomials to choose from. However, one of them corresponds to the challenge y * . Moreover, y * is distributed uniformly in the support of all valid solutions for the simulator. Since the adaptive simulator simulates with perfect security, the inverter succeeds in inverting y * with probability 1/|F| 2 . By tuning the parameters (such that |F| is polynomial in the security parameter), this is an inverse-polynomial advantage. Assuming that OWP is a one-way permutation, this implies that the adaptive simulator cannot run in polynomial time.</p><p>Our static simulator. To guarantee consistency of the simulated shares, our static simulator generates t random shares on each input wire and on each output wire of a multiplication gate.
That is, while the adversary corrupts some set I, the simulator, "in its head," chooses an arbitrary set &#206; &#8839; I of cardinality exactly t and simulates the shares for all parties in &#206;, while giving the adversary only the shares of the parties in I. As a result, there is no random choice left when simulating the output wires. The simulator holds t shares on each output wire, and also knows the constant term on the output wires of the corrupted parties. It can deterministically interpolate the polynomials on the output wires, and simulate the honest parties sending their shares on those wires. This guarantees that if there is any dependency between output wires, then the simulator provides consistent shares.</p><p>Our adaptive simulator. Assume that the adaptive adversary corrupts some party P i * . If i * &#8712; &#206; \ I, then providing the view of that party is easy: the simulator has already sampled all the shares that P i * is supposed to receive. On the other hand, if i * &#8713; &#206; \ I, then we face a new obstacle. This is because the simulator has already defined t shares on each wire, and defining an additional share on each wire would completely determine each polynomial. Let j * &#8712; &#206; \ I. The key idea of our adaptive simulator is to "replace" the sampling of shares for party P j * with sampling shares for party P i * . Specifically, we show that each random choice made for P j * that leads to the current view of the adversary can also be interpreted as a random choice made for P i * that leads to the exact same view. This procedure then updates the set &#206; of the simulator and "forgets" all the random choices made for P j * , but instead samples matching choices for P i * . This is essentially sampling random shares for P i * on the input wires of all honest parties and the outputs of f mult , under the constraints posed by the shares of P j * on the output wires.
Such sampling can be performed efficiently by solving a linear set of equations. Then, we are back to the previous case, where the corruption is made on some i * &#8712; &#206; \ I.</p></div>
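The swap described above can be pictured on a single wire. The following Python sketch is our own illustration, not the paper's code, and uses toy parameters of our choosing (the field GF(11), threshold t = 2, evaluation points α_i = i, adversary set I = {1}, padded set Î = {1, 2}, with j* = 2 and i* = 3). It checks that re-deriving the wire polynomial from the matching share of the newly corrupted party leaves the adversary's view unchanged, and that the correspondence between the forgotten and the resampled share is a bijection.

```python
# Toy sketch (ours) of the "share replacement" step on a single wire,
# over GF(11) with threshold t = 2 and evaluation points alpha_i = i.
P = 11  # toy field modulus

def interp(points):
    """Lagrange interpolation over GF(P); returns coefficients, low-first."""
    res = [0] * len(points)
    for j, (xj, yj) in enumerate(points):
        num, den = [1], 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                nxt = [0] * (len(num) + 1)
                for k, c in enumerate(num):      # multiply num by (x - xm)
                    nxt[k] = (nxt[k] - c * xm) % P
                    nxt[k + 1] = (nxt[k + 1] + c) % P
                num, den = nxt, den * (xj - xm) % P
        inv = pow(den, P - 2, P)                 # den^(-1) via Fermat
        for k, c in enumerate(num):
            res[k] = (res[k] + yj * c * inv) % P
    return res

def ev(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

v1, secret = 5, 7   # fixed by the adversary's view: P1's share, wire constant
v2 = 9              # the simulator's earlier random choice for P2 (= j*)
q = interp([(1, v1), (2, v2), (0, secret)])      # degree-t wire polynomial

# The matching choice for newly corrupted P3 (= i*) is q(alpha_3):
# re-deriving the wire polynomial from P3's share instead of P2's yields
# the same polynomial, so the adversary's view is unchanged.
v3 = ev(q, 3)
q_swapped = interp([(1, v1), (3, v3), (0, secret)])
assert q_swapped == q

# The map v2 -> v3 is a bijection on GF(P) (one linear equation per choice),
# so uniform choices for P2 translate to uniform consistent choices for P3.
image = {ev(interp([(1, v1), (2, v), (0, secret)]), 3) for v in range(P)}
print(len(image) == P)   # True
```

In the actual simulator this resampling is done for all input wires and f_mult outputs at once, by solving one linear system as described above.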
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Preliminaries</head><p>Our results hold in any natural model that captures perfect adaptive security, for example, <ref type="bibr">[10,</ref><ref type="bibr">7,</ref><ref type="bibr">16,</ref><ref type="bibr">8,</ref><ref type="bibr">25,</ref><ref type="bibr">21]</ref>. For concreteness, we will state our results using the modular (nonconcurrent) composability framework of Canetti <ref type="bibr">[7]</ref>. Indeed, the separation in this limited setting extends to any of the models listed above, whereas our positive results translate to the universal-composability setting via the transformation in <ref type="bibr">[24]</ref>. Before describing the security model, we give basic notation and define the cryptographic primitive used in our separation.</p><p>Notation. We denote by λ the security parameter. For n ∈ N, let [n] = {1, . . . , n}. Let poly denote the set of all positive polynomials, and let PPT stand for probabilistic polynomial time; a PPT algorithm runs in strictly polynomial time. A function ν : N → [0, 1] is negligible if ν(λ) &lt; 1/p(λ) for every p ∈ poly and all large enough λ. Given a random variable X, we write x ← X to indicate that x is selected according to X, and given a set X we write x ← X to indicate that x is selected uniformly at random from X.</p><p>One-way permutations. Our separation in Section 4 will rely on the existence of a one-way permutation (OWP), that is, a one-way function that is length preserving and one-to-one; we refer the reader to <ref type="bibr">[19]</ref> for more details.</p><p>▶ Definition 4 (OWP). A polynomial-time function f : {0, 1}* → {0, 1}* is a one-way permutation if the following conditions are satisfied. 1. For every λ ∈ N, f induces a permutation over {0, 1}^λ, i.e., f : {0, 1}^λ → {0, 1}^λ is one-to-one and onto. 2. 
There exists a deterministic polynomial-time algorithm A such that on input x ∈ {0, 1}* the algorithm A outputs f(x). 3. For every PPT algorithm A, every positive polynomial p(·), and all sufficiently large λ's, it holds that Pr_{x←{0,1}^λ}[A(f(x)) = x] &lt; 1/p(λ).</p></div>
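To make the syntax of Definition 4 concrete, here is a toy example of ours: a map that satisfies conditions 1 and 2 (an efficiently computable permutation of {0, 1}^λ for every λ) but deliberately violates condition 3, since it is trivially invertible and hence not one-way.

```python
# Toy instance (ours, for illustration only) of the OWP interface:
# a length-preserving bijection on bit-strings that is NOT one-way.
def toy_perm(x: str) -> str:
    """Rotate a bit-string one position to the left; a bijection on {0,1}^n."""
    assert set(x) <= {"0", "1"}
    return x[1:] + x[:1] if x else x

def toy_perm_inv(y: str) -> str:
    """The efficient inverter that condition 3 rules out for a real OWP."""
    return y[-1] + y[:-1] if y else y

x = "01101"
assert len(toy_perm(x)) == len(x)        # length preserving
assert toy_perm_inv(toy_perm(x)) == x    # a permutation, efficiently invertible
print(toy_perm(x))   # 11010
```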
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Static Security Does Not Imply Adaptive Security</head><p>In this section we prove Theorem 1. We show a functionality together with a protocol that privately computes it with perfect static security and efficient simulation. Further, we show that if the protocol privately computes the functionality with perfect adaptive security and efficient simulation, then one-way permutations do not exist. We start by defining the functionality and the protocol. Next, in Lemma 6 we prove static security and in Lemma 9 we prove that adaptive security with efficient simulation cannot be achieved assuming OWP. Combined, this proves Theorem 1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">The Functionality</head><p>The n-party functionality f_sum is parametrized by a finite binary field F_{2^ℓ} such that 2^ℓ &gt; n. Looking ahead, having a binary field will enable using a one-way permutation OWP : {0, 1}* → {0, 1}* that is applied on a vector of field elements in a clean way. During the proof below, we will denote the field by F ..= F_{2^ℓ}. The private input of every party is a polynomial of degree (at most) n − 1 over F, and the common output is the sum of the free coefficients.</p><p>Input: The input of party P_i is a polynomial g_i(x) of degree n − 1, that is defined by n coefficients a_{i,0}, . . . , a_{i,n−1} ∈ F such that g_i(x) = a_{i,0} + a_{i,1} x + . . . + a_{i,n−1} x^{n−1}.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">The Protocol</head><p>The n-party protocol π_sum is parametrized by the field F and assumes the existence of a one-way permutation OWP : {0, 1}* → {0, 1}*.</p><p>Protocol 5: Separating Protocol π_sum.</p><p>Private input: The input of party P_i is a polynomial g_i(x) of degree n − 1 over F. Randomness: The random tape of party P_i is ρ_i ← F^{n−1}. Common inputs: n distinct non-zero elements α_1, . . . , α_n ∈ F. The protocol: code for party P_i: 1. Compute (r_{i,1}, . . . , r_{i,n−1}) = OWP(ρ_i).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Consider the polynomial</head><p>h_i(x) = g_i(x) + r_{i,1} x + . . . + r_{i,n−1} x^{n−1}. 3. Send to every P_j its share γ_{i→j} ..= h_i(α_j).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4.</head><p>Having received all shares γ_{1→i}, . . . , γ_{n→i}, party P_i locally computes β_i = Σ_{j=1}^{n} γ_{j→i} and sends β_i to all other parties. 5. Having received β_1, . . . , β_n, party P_i finds the unique degree-(n − 1) polynomial H(x) satisfying H(α_j) = β_j for every j ∈ [n]. Output: H(0).</p></div>
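The following Python sketch is our own toy rendering of Protocol 5, with two simplifications that are not in the paper: a small prime field stands in for the binary field F_{2^ℓ}, and a trivial (non-one-way) bijection stands in for OWP, which suffices here because correctness does not use one-wayness.

```python
# Toy rendering (ours) of Protocol 5 (pi_sum): GF(101) replaces F_{2^l},
# and a trivial bijection on F^(n-1) replaces OWP.
import random

P = 101                      # field modulus (prime, > n)
n = 4                        # number of parties
ALPHAS = [1, 2, 3, 4]        # distinct non-zero evaluation points

def perm(rho):               # stand-in for OWP: any bijection on F^(n-1)
    return [(x + 7) % P for x in rho]

def ev(coeffs, x):           # evaluate a polynomial given low-first coeffs
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def interp_at_zero(points):  # Lagrange interpolation of H, returning H(0)
    total = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (0 - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

# Private inputs: each P_i holds a degree-(n-1) polynomial g_i.
inputs = [[random.randrange(P) for _ in range(n)] for _ in range(n)]

# Steps 1-3: P_i computes (r_{i,1},...,r_{i,n-1}) = perm(rho_i), forms
# h_i(x) = g_i(x) + sum_k r_{i,k} x^k, and sends h_i(alpha_j) to P_j.
recv = [[0] * n for _ in range(n)]          # recv[j][i] = gamma_{i -> j}
for i in range(n):
    rho = [random.randrange(P) for _ in range(n - 1)]
    r = perm(rho)
    h = [inputs[i][0]] + [(inputs[i][k] + r[k - 1]) % P for k in range(1, n)]
    for j in range(n):
        recv[j][i] = ev(h, ALPHAS[j])

# Step 4: P_i broadcasts beta_i = sum_j gamma_{j -> i}.
betas = [sum(recv[i]) % P for i in range(n)]

# Step 5: interpolate H through (alpha_j, beta_j) and output H(0).
output = interp_at_zero(list(zip(ALPHAS, betas)))
expected = sum(g[0] for g in inputs) % P    # sum of the free coefficients
print(output == expected)                   # True
```

Since the masking terms r_{i,k} x^k all have zero constant term, H(0) = Σ_i g_i(0) regardless of the random tapes, which is the correctness property used below.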
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Static Security</head><p>We start by proving static security of π_sum. We note that static security holds even if OWP is simply a permutation that is not necessarily one-way and can be inverted efficiently.</p><p>▶ Lemma 6. Protocol π_sum statically (n − 1)-privately computes f_sum with perfect semi-honest security and efficient simulation.</p><p>Proof. Let A be a static semi-honest adversary and let I ⊂ [n], of size |I| &lt; n, denote the set of corrupted parties' indices. We construct a simulator S as follows:</p><p>1. The simulator initially receives auxiliary information z and the inputs of the corrupted parties (g_i(x))_{i∈I}. First, S sends (g_i(x))_{i∈I} to the trusted party and receives back the output value y. Next, S invokes A on z and (g_i(x))_{i∈I}. 2. To simulate the first round, for every j ∉ I, the simulator chooses a random ρ_j ← F^{n−1} and computes (r_{j,1}, . . . , r_{j,n−1}) = OWP(ρ_j). Next, S chooses arbitrary polynomials (g̃_j(x))_{j∉I}, each of degree at most n − 1, under the constraint that Σ_{j∉I} g̃_j(0) = y − Σ_{i∈I} g_i(0), and defines for every j ∉ I the polynomial h_j(x) = g̃_j(x) + r_{j,1} x + . . . + r_{j,n−1} x^{n−1}.</p><p>Finally, for every j ∉ I and i ∈ I, the simulator sends γ_{j→i} ..= h_j(α_i) to A as the message from an honest P_j to a corrupted P_i, and receives the messages γ_{i→j} from A as the message from a corrupted P_i to an honest P_j. 3. To simulate the second round, for every j ∉ I, the simulator computes β_j ..= Σ_{k=1}^{n} γ_{k→j} and sends β_j as the message from P_j. Next, S receives the message β_i from A on behalf of every corrupted P_i. 4. Finally, S outputs whatever A outputs, and halts.</p><p>Note that by construction, the simulator S runs in polynomial time in the running time of A. We next claim that the view of A in the real execution of π_sum is distributed identically to the view generated by S in the ideal execution.</p><p>Proof. 
Note that the only difference between the real execution of the protocol and the simulated execution in the ideal world is the construction of the constant terms of the polynomials (h_j(x))_{j∉I}:</p><p>In the real protocol, these constant terms are the constant terms of the original inputs of the honest parties (g_j(x))_{j∉I}. In this case, by the linearity of Shamir's secret sharing and by the correctness of the interpolation, it holds that the output equals Σ_{i=1}^{n} g_i(0).</p><p>In the simulated execution, these constant terms are the constant terms of the polynomials (g̃_j(x))_{j∉I}. These polynomials are generated under the constraint that Σ_{j∉I} g̃_j(0) = y − Σ_{i∈I} g_i(0), where y is computed by the trusted party to be Σ_{i=1}^{n} g_i(0). It follows that Σ_{j∉I} g̃_j(0) = Σ_{j∉I} g_j(0).</p><p>The claim now follows by the perfect privacy of Shamir's secret sharing. ◁</p><p>This concludes the proof of Lemma 6. ◀</p></div>
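The constant-term identity at the core of this argument can be checked numerically. The sketch below is ours, with toy parameters of our choosing (GF(101), n = 4, corrupted set I = {0, 1} in zero-based indexing): it enforces the simulator's constraint on the fake polynomials and confirms that the honest parties' free coefficients sum to the same value in both worlds.

```python
# Numeric check (ours) of the identity behind the claim: if the fake inputs
# satisfy sum_{j not in I} g~_j(0) = y - sum_{i in I} g_i(0), then the honest
# parties' free coefficients sum identically in the real and ideal worlds.
import random

P, n = 101, 4
I = [0, 1]                               # corrupted parties (0-based)
H_IDX = [2, 3]                           # honest parties
real = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
y = sum(g[0] for g in real) % P          # the trusted party's output

# Simulator: arbitrary fake polynomials for the honest parties, with the
# constant terms constrained exactly as in the proof.
fake = {j: [random.randrange(P) for _ in range(n)] for j in H_IDX}
target = (y - sum(real[i][0] for i in I)) % P
fake[H_IDX[0]][0] = (target - sum(fake[j][0] for j in H_IDX[1:])) % P

sim_const = sum(fake[j][0] for j in H_IDX) % P
real_const = sum(real[j][0] for j in H_IDX) % P
print(sim_const == real_const)           # True
```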
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Inefficient Adaptive Security</head><p>By the construction of the static simulator it is clear that the simulation is straight-line and black-box; further, since we consider a semi-honest adversary that in particular does not change the corrupted parties' inputs, "round zero" (the beginning of the protocol) can be set as the committal round. Therefore, we can use <ref type="bibr">[9]</ref> to derive the following corollary.</p><p>▶ Corollary 8. Protocol π_sum adaptively (n − 1)-privately computes f_sum with perfect semi-honest security.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5">No Efficient Adaptive Simulator</head><p>We proceed to prove that an efficient adaptive simulator can be used to invert the one-way permutation.</p><p>▶ Lemma 9. Assume that OWP is a one-way permutation. Then, Protocol π_sum does not adaptively (n − 1)-privately compute f_sum with perfect semi-honest security and efficient simulation.</p><p>Proof. Let λ denote the security parameter, i.e., the one-way permutation is defined as OWP : {0, 1}^λ → {0, 1}^λ. For simplicity, assume that λ = (n − 1) log(n + 1) for some n such that n + 1 is a power of 2 (this holds without loss of generality, since the security of the OWP holds for all sufficiently large λ's). We consider the n-party functionality f_sum that is defined with respect to the field F = F_{2^ℓ} where ℓ = log(n + 1); then, indeed, |F| &gt; n as required and OWP induces a permutation over F^{n−1}.</p><p>Defining the adversary and the environment. Consider the following adaptive, semi-honest, (n − 1)-limited adversary A and environment Z for π_sum.</p><p>1. The environment does not send any auxiliary information to the adversary at the beginning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2.</head><p>The adversary A starts by corrupting the parties P_3, . . . , P_n and learns their inputs (g_3(x), . . . , g_n(x)) and random tapes (ρ_3, . . . , ρ_n). The environment does not send any auxiliary information to the adversary for these corruptions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.</head><p>Next, A receives first-round messages γ_{1→3}, . . . , γ_{1→n} from P_1 and γ_{2→3}, . . . , γ_{2→n} from P_2. 4. The adversary completes the execution honestly and outputs its view. 5. The environment corrupts P_1 in the post-execution corruption (PEC) phase and learns (amongst other things) its input g_1(x), randomness ρ_1, and the values (r_{1,1}, . . . , r_{1,n−1}) computed in Step 1 of π_sum. 6. The environment checks whether (r_{1,1}, . . . , r_{1,n−1}) = OWP(ρ_1); if so it outputs real and otherwise ideal.</p><p>The corresponding adaptive simulator. By the assumed security of π_sum, there exists an efficient adaptive simulator S for A and Z. Note that by construction, the adversary A runs in polynomial time with respect to the parameter λ; hence, S is PPT with respect to λ. We will use S to construct a PPT inverter D for the OWP.</p><p>Constructing the inverter from the simulator. The inverter D receives as input the challenge y* ∈ {0, 1}^λ. We will consider y* as an element of F^{n−1}, i.e., y* = (r*_1, . . . , r*_{n−1}) ∈ F^{n−1}. The inverter D proceeds as follows: 1. D invokes the simulator S and emulates the ideal computation of f_sum toward S; initially, D sends nothing as the auxiliary information to S. 2. When S corrupts the parties P_3, . . . , P_n in the emulated ideal computation, D chooses arbitrary polynomials (g_3(x), . . . , g_n(x)) of degree at most n − 1 over F and hands g_i(x) to S as the input value of P_i; D sends nothing as the auxiliary information to S. 3. When S sends inputs to the trusted party on behalf of the corrupted parties, D responds with an arbitrary output value y ∈ F. 4. Once S generates its output, containing the view of A, the inverter D extracts the first-round messages that each corrupted party received from the honest party P_1, denoted γ_{1→3}, . . . , γ_{1→n}. 5. 
D samples two field elements γ_1, γ_2 ← F and interpolates the unique polynomial h of degree n − 1 satisfying the constraints h(α_1) = γ_1, h(α_2) = γ_2, and h(α_i) = γ_{1→i} for every i ∈ {3, . . . , n}. Next, D computes g_1(x) = h(x) − Σ_{k=1}^{n−1} r*_k x^k. 6. D sends a "corrupt P_1" request to S during the PEC phase. When S corrupts P_1 in the emulated ideal computation, D sets the input of P_1 to be g_1(x). Next, S responds with the view of P_1, containing the content of its random tape ρ_1. 7. D outputs ρ_1.</p><p>Efficiently inverting the OWP. First, notice that by construction, the running time of the inverter D is polynomial with respect to its input y* and to the running time of S; therefore, D is PPT with respect to the parameter λ. We proceed to show that the success probability of D in inverting a random challenge y* ← {0, 1}^λ is non-negligible; specifically, we claim that D inverts y* with probability 1/|F|².</p><p>Proof. In our proof, we do not assume any specific behavior of the adaptive simulator S (i.e., we do not know how the simulator generates its output messages). However, based on the assumed perfect security of the protocol, the adaptive simulator S operates according to the following interface: 1. S corrupts parties P_3, . . . , P_n and learns their inputs (g_3(x), . . . , g_n(x)). 2. S sends the inputs (g_3(x), . . . , g_n(x)) to the trusted party and obtains the output value y.</p><p>3. S generates the simulated view of the adversary, which in particular contains the messages γ_{1→3}, . . . , γ_{1→n} from P_1 to parties P_3, . . . , P_n. 4. S corrupts party P_1 and learns its input g_1(x). 5. S outputs the view of P_1, including its random tape ρ_1 and the values (r_{1,1}, . . . , r_{1,n−1}) = OWP(ρ_1), such that the polynomial h_1(x) = g_1(x) + Σ_{k=1}^{n−1} r_{1,k} x^k satisfies h_1(α_i) = γ_{1→i} for every i ∈ {3, . . . , n}.</p><p>Recall that in Step 5 of the construction of the inverter, D samples γ_1, γ_2 ← F and interpolates the polynomial h(x) that satisfies the constraints h(α_1) = γ_1, h(α_2) = γ_2, and h(α_i) = γ_{1→i} for i ∈ {3, . . . , n}.</p><p>First, for every choice of γ_1, γ_2 ∈ F there exists a unique polynomial of degree n − 1 over F satisfying these constraints. In the other direction, every polynomial h′(x) of degree at most n − 1 over F satisfying h′(α_i) = γ_{1→i} for i ∈ {3, . . . , n} induces two points γ_1 = h′(α_1) and γ_2 = h′(α_2) in F.</p><p>It follows that the inverter succeeds in inverting y* if and only if h_1(α_1) = γ_1 and h_1(α_2) = γ_2 (where h_1(x) is the polynomial defined by S). Indeed, in this case h_1(x) = h(x), which means that h_1(x) − g_1(x) = h(x) − g_1(x), i.e., Σ_{k=1}^{n−1} r_{1,k} x^k = Σ_{k=1}^{n−1} r*_k x^k, and finally (r_{1,1}, . . . , r_{1,n−1}) = (r*_1, . . . , r*_{n−1}). Since γ_1 and γ_2 are sampled uniformly at random from F, this happens with probability 1/|F|². ◁</p><p>Finally, note that y* ∈ F^{n−1}, i.e., y* ∈ {0, 1}^{(n−1) log(n+1)} = {0, 1}^λ, whereas the inverting probability is 1/|F|² = 1/2^{2 log(n+1)} = 1/(n + 1)², which is non-negligible in λ. We conclude that D is PPT in λ and succeeds in inverting OWP with non-negligible probability. This contradicts the assumption that OWP is a one-way permutation. ◀</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>A simulator is called straight-line if it does not rewind the adversary, and is called black-box if it does not rely on the code of the adversary.</p></note>
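The coefficient-extraction step of the inverter can be replayed concretely. The Python sketch below is ours, with toy values of our choosing (GF(11), n = 4, α_i = i, a fixed challenge r* and fixed round-one messages): it runs Step 5 of D and checks that, in the matching case h_1 = h, the coefficients of h_1 − g_1 are exactly the challenge (r*_1, . . . , r*_{n−1}).

```python
# Replaying Step 5 of the inverter D with toy values (ours): GF(11), n = 4.
P, n = 11, 4

def interp(points):
    """Lagrange interpolation over GF(P); returns coefficients, low-first."""
    res = [0] * len(points)
    for j, (xj, yj) in enumerate(points):
        num, den = [1], 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                nxt = [0] * (len(num) + 1)
                for k, c in enumerate(num):      # multiply num by (x - xm)
                    nxt[k] = (nxt[k] - c * xm) % P
                    nxt[k + 1] = (nxt[k + 1] + c) % P
                num, den = nxt, den * (xj - xm) % P
        inv = pow(den, P - 2, P)
        for k, c in enumerate(num):
            res[k] = (res[k] + yj * c * inv) % P
    return res

r_star = [4, 9, 2]            # the challenge y*, read as (r*_1, r*_2, r*_3)
g13, g14 = 6, 8               # simulated messages gamma_{1->3}, gamma_{1->4}
gamma1, gamma2 = 3, 10        # the inverter's two random points

# Step 5: interpolate h through the four points, then set
# g_1(x) = h(x) - sum_k r*_k x^k (the constant term is untouched).
h = interp([(1, gamma1), (2, gamma2), (3, g13), (4, g14)])
g1 = [h[0]] + [(h[k] - r_star[k - 1]) % P for k in range(1, n)]

# Matching case: the simulator's h_1 agrees with h at alpha_1 and alpha_2
# (it agrees at alpha_3, alpha_4 by construction), so h_1 is the unique
# degree-(n-1) interpolant through all four points, i.e., h_1 = h.
h1 = interp([(1, gamma1), (2, gamma2), (3, g13), (4, g14)])
recovered = [(h1[k] - g1[k]) % P for k in range(1, n)]
print(recovered == r_star)    # True: D reads off the challenge coefficients
```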
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1"><p>We note that Backes et al.<ref type="bibr">[3]</ref> showed that this transformation no longer holds if the simulator is not straight-line.</p></note>
		</body>
		</text>
</TEI>
