<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Threshold Signatures in the Multiverse</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>2023</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10422310</idno>
					<idno type="doi"></idno>
					<title level='j'>2023 IEEE Symposium on Security and Privacy (SP)</title>

					<author>Leemon Baird</author><author>Sanjam Garg</author><author>Abhishek Jain</author><author>Pratyay Mukherjee</author><author>Rohit Sinha</author><author>Mingyuan Wang</author><author>Yinuo Zhang</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[We introduce a new notion of multiverse threshold signatures (MTS). In an MTS scheme, multiple universes, each defined by a set of (possibly overlapping) signers, their weights, and a specific security threshold, can co-exist. A universe can be (adaptively) created via a non-interactive asynchronous setup. Crucially, each party in the multiverse holds constant-sized keys and releases compact signatures with size and computation time both independent of the number of universes. Given sufficient partial signatures over a message from the members of a specific universe, an aggregator can produce a short aggregate signature relative to that universe. We construct an MTS scheme building on BLS signatures. Our scheme is practical and can be used to reduce bandwidth complexity and computational costs in decentralized oracle networks. As an example data point, consider a multiverse containing 2000 nodes and 100 universes (parameters inspired by Chainlink's use in the wild), each of which contains arbitrarily large subsets of nodes and arbitrary thresholds. Each node computes and outputs 1 group element as its partial signature; the aggregator performs under 0.7 seconds of work for each aggregate signature, and the final signature of size 192 bytes takes 6.4 ms (or 198K EVM gas units) to verify. For this setting, prior approaches, when used to construct MTS, yield schemes with one of the following drawbacks: (i) partial signatures that are 48× larger, (ii) aggregation times that are 311× worse, or (iii) signature sizes 39× and verification gas costs 3.38× larger. We also provide an open-source implementation and a detailed evaluation.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>A threshold signature scheme <ref type="bibr">[25]</ref>, <ref type="bibr">[26]</ref> allows for distributing a secret signing key among multiple parties such that each party can (non-interactively) generate a partial signature over any message m using its key share. <ref type="foot">2</ref> Given sufficiently many partial signatures, an untrusted aggregator can combine them into a compact signature attesting that a threshold number of signers signed m. Threshold signatures have seen widespread use in recent years, especially within the blockchain ecosystem <ref type="bibr">[43]</ref>. Furthermore, efforts to standardize threshold cryptosystems have already begun <ref type="bibr">[39]</ref>.</p><p>Threshold signatures have traditionally been studied in a static setting where the signers and the threshold are fixed and all verifiers have the same belief (i.e., trust) in the signers. However, as we discuss shortly, emerging applications in blockchains involve verifiers who do not necessarily share the same beliefs. In particular, each verifier might live in its own universe, where it trusts only a specific subset of signers and wishes to choose its own security threshold. We ask whether it is possible to design threshold signature schemes where multiple such universes can co-exist.</p><p>A naïve approach to handle this scenario involves simply executing a fresh instance of a threshold signature scheme for each universe. This, however, leads to highly impractical solutions even for modest choices of parameters (see discussion later in this section). The main goal of our work is to address this scalability challenge.</p><p>Multiverse Threshold Signatures. We introduce a new notion of multiverse threshold signatures (MTS), where at any time, a verifier can define a new universe containing any subset of the parties present in the system. 
The multiverse is the set of all such (possibly overlapping) universes with possibly different security thresholds. A party signs a message irrespective of what (or how many) universes it is in, and an aggregator can take a threshold number of partial signatures corresponding to a universe and produce a short aggregate signature (independent of the size of the universe) that can be verified under the specific universe's verification key.</p><p>An MTS scheme must satisfy the following properties:</p><p>1) non-interactive setup: a universe involving any set of parties can be set up via a non-interactive protocol. 2) compact keys: each party's state is oblivious to the universes that it belongs to. In particular, each party's key material (and state) is independent of the number of universes that it participates in. 3) compact partial signatures: each party's partial signature is compact, i.e., it is independent of the number of universes the party belongs to. In particular, its partial signature should be reusable for computing aggregate signatures across different universes. 4) fast aggregation and verification: it should take constant time to verify an aggregated signature for any universe. In the best case, aggregation will be linear in the number of partial signatures; we require that this aggregation be concretely efficient.</p><p>The security requirements are similar to those of standard threshold signatures. Namely, we require that an aggregate signature associated with a universe can only be verified when a threshold number of parties in that universe have signed the corresponding message.</p><p>Our multiverse model is inspired by an important scaling issue that arises in oracle networks for smart contracts, which rely heavily on threshold signatures.</p><p>Application: Decentralized Oracle Networks. 
Oracles enable smart contracts to perform transactions based on off-chain data, such as issuing DeFi transactions based on the exchange rate of tokens or automatically protecting user funds during an undercollateralization event (e.g., unforeseen fractional reserve practices from off-chain custodians <ref type="bibr">[1]</ref>). In Chainlink <ref type="bibr">[19]</ref>, <ref type="bibr">[27]</ref>, <ref type="foot">3</ref> whenever the data feed's value (e.g., MKR/ETH exchange rate) fluctuates beyond a limit, the oracle nodes collectively agree on a new value to submit on-chain along with a threshold signature that is verified by the smart contract.</p><p>Historically, smart contract authors have shared the same data feed, whenever possible, typically to offset high gas costs on platforms such as Ethereum. However, with substantially higher throughput and lower fees on the next generation of smart contract platforms, and with an increased set of applications with diverse security and cost profiles, the one-size-fits-all approach is no longer adequate. For instance, contracts that perform autonomous real-time auditing of collateral prefer the (proof-of-reserve <ref type="bibr">[1]</ref>) data feed to be fulfilled by a set of highly reputed and rigorously audited oracle nodes <ref type="bibr">[6]</ref>; such contracts may also choose higher thresholds for the signature. On the opposite end of the spectrum, contracts with lower security needs (e.g., sports betting) may opt for lower-cost oracle nodes, and latency-sensitive applications may choose highly available nodes, or opt for lower security thresholds for the signatures. Indeed, Chainlink's users already have the appropriate knobs, i.e., choice of oracle nodes (with reputation scores), signing threshold, etc. 
This scenario naturally maps to our multiverse setting, since each separate preference can be modeled as a separate universe with a specific subset of all nodes and a specific threshold.</p><p>As signature verification is done by a smart contract, it places additional efficiency requirements on MTS. In particular, we need the verification EVM gas costs to be small.</p><p>Drawbacks of existing approaches. While the notion of MTS is new to this paper, we observe the drawbacks of existing approaches in realizing MTS. As discussed earlier, a naïve approach for the multiverse setting can be obtained by implementing an independent copy of a threshold signature scheme for every universe. It is not hard to see, however, that such a solution quickly runs into scalability issues. A node that belongs to n different universes would need to use n different signing keys to sign the same message n times, generating n partial signatures, which are then broadcast to the aggregator. Hence, the state size, signature size, and computation all scale linearly with n. Consider some data points based on the metrics reported by Chainlink. For example, even with 20 universes, each spanning an average of 500 (active) nodes who will sign the message, an aggregator receives 0.46 MB worth of partial signatures for a single message (data feed), and nearly 1 GB of traffic for the over 2000 data feeds served by the Chainlink network <ref type="foot">4</ref>, all for a single round of messages. This approach quickly exhausts bandwidth on a peer-to-peer network. The problem becomes completely unmanageable in the weighted setting, <ref type="foot">5</ref> as both the network traffic and the signer's computation are linear in the weights.</p><p>To avoid such scaling issues, one can devise an alternative solution using succinct non-interactive arguments of knowledge (SNARKs) <ref type="bibr">[11]</ref>, <ref type="bibr">[12]</ref>. 
The high-level idea is to design an aggregator who computes a SNARK attesting to having witnessed a threshold number of signatures over a message. This approach, however, yields prohibitive aggregation costs. Even using state-of-the-art SNARKs (such as PLONK <ref type="bibr">[29]</ref>) and the most efficient signature schemes, such as EdDSA <ref type="bibr">[10]</ref>, results in aggregation times on the order of a few minutes for modest choices of parameters (see Section VI). Moreover, even such performance numbers are only possible when we instantiate EdDSA with SNARK-friendly hash functions such as MiMC <ref type="bibr">[7]</ref>, whose security is not well understood. Finally, we note that this approach inherently makes non-black-box use of the hash function and therefore does not yield a security proof in the Random Oracle Model.</p><p>We also consider Micali et al.'s <ref type="bibr">[38]</ref> SNARK approach specialized to the signature setting to obtain smaller aggregation times. However, this reduction in aggregation time comes at the cost of a larger signature size and an increase in the verification gas cost, which is undesirable for our application.</p><p>Finally, one may wonder if multisignatures <ref type="bibr">[14]</ref>, <ref type="bibr">[37]</ref> can be used to construct multiverse threshold signatures. In particular, every party samples its own public/secret key pair; the public key of each universe would consist of the public keys of all the users in the universe. To aggregate partial signatures, the aggregator generates a multisignature <ref type="foot">6</ref> on all the partial signatures to certify that enough parties from the universe signed the message. 
The apparent advantages of this approach are that: 1) a party's private key is oblivious to the universes that it belongs to; 2) there is no setup phase; and 3) in the weighted setting, the aggregator's computation is independent of weights, providing off-chain scalability in the case of oracle networks. However, the multisig approach has prohibitive performance and cost concerns on the verification side. For instance, the aggregated signature itself needs to contain the information regarding which set of parties has signed; thus, the signature size is linear in the universe size. Moreover, the universe's public verification key and the verification time also grow linearly with the size of the universe. In the case of oracle networks, these drawbacks make the smart contract prohibitively expensive: on Ethereum, it takes up to 60 million gas to set up the smart contract for 2000 signers. We provide a cost analysis for the multisignature-based approach (on Ethereum smart contracts) in Section VI.</p></div>
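The bandwidth figures in the naïve-approach discussion can be reproduced with a back-of-envelope calculation; the 48-byte partial-signature size below is our assumption (the size of a compressed BLS12-381 G1 element), as the text does not fix the curve or encoding:

```python
# Back-of-envelope check of the naive-approach data point, assuming each
# partial signature is a 48-byte compressed BLS12-381 G1 element.
PARTIAL_SIG_BYTES = 48
universes = 20               # universes a node may belong to
signers_per_universe = 500   # average active signers per universe
data_feeds = 2000            # feeds served by the network, per the text

per_message = universes * signers_per_universe * PARTIAL_SIG_BYTES
total = per_message * data_feeds

print(f"{per_message / 2**20:.2f} MiB per message")  # ~0.46 MiB
print(f"{total / 2**30:.2f} GiB across all feeds")   # ~0.89 GiB, i.e., ~1 GB
```

Under this assumption the numbers match the 0.46 MB and "nearly 1 GB" figures quoted above.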
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Advantages of MTS.</head><p>We now describe how the efficiency features of MTS help address the shortcomings of existing approaches. Recall that while selecting a data feed, a smart contract specifies an access structure, which includes the set of oracle nodes, their weights, and a threshold. Hence, a data feed maps to a universe, wherein the highly reputed oracle nodes would typically be members of a large number of universes.</p><p>The compact partial signatures property of MTS addresses the bandwidth concern: the bandwidth reduces from 1 GB to 1 MB for the data point discussed above. Second, regardless of the node's weight or the number of smart contracts (universes), each node keeps only a compact key whose size equals the security parameter κ = 128 bits, and it does not require any information about its universes during signing. Finally, due to the non-interactive asynchronous setup property, nodes can go offline without being penalized (though economic incentives encourage honest participation); that is, a node can come online and participate in a universe's creation independently of other nodes. Compare this to the setup for threshold BLS, where parties engage in an interactive DKG protocol <ref type="bibr">[24]</ref>, <ref type="bibr">[33]</ref>. The signing phase is identical to BLS, so parties operate non-interactively, and a threshold number of correct participants is sufficient to construct an aggregate signature. We stress that having a compact state and a non-interactive setup not only makes the system efficient, but also greatly simplifies its design.</p><p>Our Contribution. The contribution of this work is two-fold. First, we present formal definitions of multiverse threshold signatures. Second, we give the first construction of an MTS scheme building on the BLS signature. We prove that our scheme satisfies existential unforgeability based on the knowledge of exponent assumption <ref type="bibr">[23]</ref>. 
We provide an open-source implementation of our scheme at <ref type="url">http://github.com/rsinha/mts</ref>. We then present a detailed evaluation of our MTS scheme, where we show that it is practical for application to decentralized oracle networks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. TECHNICAL OVERVIEW</head><p>In this section, we give a high-level overview of our construction of the BLS-based <ref type="bibr">[16]</ref> multiverse threshold signature scheme. For simplicity, the presentation in this section only considers the unweighted setting, but it naturally extends to the weighted setting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. The Multiverse Model</head><p>A multiverse is a set of possibly overlapping universes, each containing an arbitrary subset of all parties {P_1, P_2, . . .} in the system and a specific threshold. When a new party P_i enters the system, it chooses a private key sk_i and publishes the corresponding public key pk_i. Then, the online parties engage in a separate setup phase for each universe, where each party independently contributes randomness to the public parameters unique to that universe. Once a universe is set up, it can be used to produce threshold signatures. During signing, each party P_i uses its private key sk_i to sign a message m, and the partial signature is broadcast or sent to an aggregator; this step is totally agnostic of any universe. For each universe, there is a separate aggregation procedure which combines the partial signatures for that particular universe. Crucially, the partial signatures are "reusable" so that the same partial signature can be used in an unlimited number of aggregation procedures across multiple universes to produce many signatures on the same message. Each aggregated signature verifies only with respect to a specific universe.</p><p>The setup phase can happen concurrently with other setups or signing phases. Moreover, both the setup and signing phases are fully asynchronous and non-interactive, and they require honest participation from only a threshold (say, T) number of parties.</p><p>We emphasize that, in our model, we do not assume the aggregator to be honest for unforgeability.</p></div>
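The lifecycle above can be sketched as a minimal mock; everything here is hypothetical scaffolding with no cryptography, meant only to show structurally that signing is universe-agnostic while aggregation is per-universe, and that partial signatures are reusable across overlapping universes:

```python
from dataclasses import dataclass

# Non-cryptographic mock of the multiverse lifecycle. A partial
# "signature" is just a (party, message) tag -- nothing is signed.

@dataclass(frozen=True)
class Universe:
    members: frozenset  # party ids in this universe
    threshold: int      # unweighted threshold T

def sign(party_id, message):
    # A party signs once, with no knowledge of any universe.
    return (party_id, message)

def aggregate(universe, partials, message):
    # Per-universe aggregation: succeeds iff >= T members contributed.
    signers = {p for p, m in partials if m == message and p in universe.members}
    if len(signers) < universe.threshold:
        return None
    return (frozenset(signers), message)  # stand-in for a compact signature

u1 = Universe(frozenset({1, 2, 3}), threshold=2)
u2 = Universe(frozenset({1, 2, 4}), threshold=2)
partials = [sign(p, "price=42") for p in (1, 2)]  # each party signs once
assert aggregate(u1, partials, "price=42") is not None  # reused in u1...
assert aggregate(u2, partials, "price=42") is not None  # ...and in u2
```

The same two partial "signatures" aggregate in both overlapping universes, mirroring the reusability property described above.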
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Construction</head><p>Background on Threshold BLS. We start by recalling the standard threshold BLS signature scheme. Let G be a group of prime order p with a generator g in which the standard pairing-based assumption holds. Let H be a cryptographic hash function that maps messages to group elements. In the threshold BLS signature scheme, there is a setup phase in which a random secret key sk ← F_p is sampled and shared among the parties using Shamir secret sharing for a threshold T; the public key is set to be pk = g^sk. Let sk_i be party i's share. To ensure complete decentralization, the setup is implemented by an interactive distributed key generation (DKG) protocol such as <ref type="bibr">[33]</ref>. To sign a message m, party i uses sk_i to compute H(m)^{sk_i} as its partial signature. Given at least T signatures {H(m)^{sk_i}}_i, one can simply use Lagrange interpolation to compute the aggregated threshold signature H(m)^sk, which verifies as e(pk, H(m)) = e(g, H(m)^sk).</p><p>Extending Threshold BLS to the Multiverse. Extending this scheme from a single universe to a multiverse setting encounters several roadblocks. A first, naïve approach is to simply repeat threshold BLS for every universe. That is, to create a new universe, the involved parties engage in a fresh instance of the DKG protocol to generate new secret shares for this new universe. As mentioned before, this runs into a scalability issue: for each party, the size of the secret key, the computation, and the corresponding signatures all grow linearly with the number of universes. Recall that we need signatures to be generated independently of any universe. However, this appears hard to achieve if the setup phase generates correlated shares of the secret keys; such correlation makes the individual secret-key shares compatible with only one public key and hence one universe. 
Therefore, we change the setup phase in a way that ensures the individual secret keys are compatible with different public keys.</p><p>Our Approach. We let each party pick its own secret key independently and publish the corresponding public key. Each universe then combines a different combination of the public keys to compute a universe-specific public key. In more detail, suppose a universe consists of parties P_1, P_2, . . . , P_N with a threshold T. Then each party P_i chooses an independent secret key sk_i ← F_p and publishes the public key pk_i = g^{sk_i}. Note that the secret keys sk_1, . . . , sk_N implicitly define a degree-(N-1) polynomial f (hidden from all parties) where f(i) = sk_i. Moreover, any evaluation of this polynomial, and in particular the verification key pk = g^{f(0)}, can be constructed via Lagrange interpolation in the exponent. Now, we notice that if N-T points on this polynomial f are somehow made public, this effectively reduces the degree of the polynomial f from N-1 to T-1. For instance, suppose f(-1), f(-2), . . . , f(-(N-T)) are publicly known. Then, given any T signatures {H(m)^{sk_i}}_i, an aggregator can compute H(m)^{f(-1)}, . . . , H(m)^{f(-(N-T))} locally and subsequently compute the aggregated signature σ = H(m)^{f(0)}. Although this approach works for a single universe, it breaks down when used directly in a multiverse setting. The crux of the problem is that the public evaluation points of different polynomials (corresponding to different universes) collide, which ends up revealing too many linear equations about the honest parties' keys. Let us elaborate with an example. Consider four parties among which P_1 and P_2 are honest, but P_3 and P_4 are controlled by the adversary. Now consider a universe U that consists of P_1, P_2, P_3 and has threshold 2, which means the evaluation f_U(-1) is public, where f_U is the secret degree-2 polynomial corresponding to universe U. 
Now assume another universe U′ consisting of P_1, P_2, and P_4 with threshold 2; this implies the point f_U′(-1) of polynomial f_U′ is also public. Note that, since f_U(-1) and f_U′(-1) each reveal one linear relation about the honest parties' secret keys sk_1 and sk_2, the malicious parties could potentially use them to reconstruct sk_1 and sk_2 entirely! <ref type="foot">7</ref> To resolve this issue in the multiverse setting, we refrain from publishing additional points explicitly to reduce the threshold, but instead change how verification works and use the additional points in the exponent (which, as mentioned above, everyone can compute from the public keys). Without loss of generality, suppose that P_1, P_2, . . . , P_T sign a message m. Each party P_i publishes H(m)^{f(i)} as a partial signature. Additionally, one can compute the public points g^{f(j)} for j ∈ {-1, -2, . . .}. The challenge is now to use these together to ensure verification as well as unforgeability.</p><p>To see this, we first express the secret key as f(0) = Σ_{j=-(N-T)}^{-1} λ_j f(j) + Σ_{i=1}^{T} λ_i f(i), where the λ_i's are the appropriate Lagrange coefficients. For ease of notation, let us write f(0) = f_pub(0) + f_par(0), where f_pub(0) stands for the public-points part and f_par(0) stands for the partial-signature part. We observe that H(m)^{f(0)} = H(m)^{f_pub(0)} · H(m)^{f_par(0)}. Therefore, the verification procedure could check e(pk/σ_pub, H(m)) = e(g, σ_par), (1) where the aggregator computes the threshold signature σ_par = H(m)^{f_par(0)} from the partial signatures and the verifier later computes σ_pub = g^{f_pub(0)} from the public points. Note that both these computations require knowledge of the participating sets (as the Lagrange coefficients depend on that information). This becomes a problem for the verifier, as by definition threshold signature verification should be agnostic about the participating sets. 
Furthermore, the verification procedure needs to perform O(N) work to compute σ_pub, incurring significant computation cost.</p><p>A potential way to resolve this issue could be to ask the aggregator to also compute σ_pub. The verifier would then verify the aggregated signature (σ_pub, σ_par) by checking Equation 1. However, one can see that this construction is not secure. Note that for any α ∈ F_p, the pair σ_pub = pk/g^α and σ_par = H(m)^α is a signature that passes Equation 1, violating unforgeability completely. This attack is due to the fact that σ_pub is not the correct aggregation of the public points. Rather, the adversary intentionally picks σ_pub such that it knows the exponent of pk/σ_pub, namely, α.</p><p>Our construction handles this issue via a different route. During the setup of a universe, we ask the parties to engage in a one-round protocol to collectively raise the public points to some random power k ∈ F_p. In particular, we ask party P_i to pick a random k_i ← F_p and send g^{k_i} together with the raised public points {g^{k_i · f(j)}}_j for j ∈ {-1, . . . , -(N-T)}. The final public parameter pp of the universe shall be pp = (g^k, {g^{k · f(j)}}_j), where k = Σ_i k_i. <ref type="foot">8</ref> An aggregator now additionally computes σ_pub1, which needs to pass a second verification equation: e(σ_pub, g^k) = e(σ_pub1, g). (2)</p><p>This check ensures that as long as σ_pub is a linear combination of the public points, one can compute σ_pub1 by applying the same linear combination to the public parameter. Alternatively, if σ_pub is not a linear combination of the public points, one cannot find the unique σ_pub1 that passes Equation 2 except with negligible probability. Unforgeability follows from the fact that as long as σ_pub is a linear combination of the public points, the adversary never knows the exponent of pk/σ_pub, and without knowing that one cannot forge a signature. 
Hence, we prove existential unforgeability assuming a variant of the widely used knowledge-of-exponent assumption <ref type="bibr">[23]</ref>.</p><p>The above modification leads to a one-round setup phase. However, it also requires a stateful aggregator, because the aggregator needs to use the entire public parameter pp during aggregation and, hence, must maintain it in its local state.</p><p>Handling Offline Parties. One salient feature of the setup phase is that it does not require all parties to be online. In particular, security relies only on the presence of one honest party who contributes a random k_i to the random power k. Therefore, with a corruption threshold of T-1, the setup of a universe succeeds as long as ⩾ T parties participate in the setup phase.</p><p>Extension to the weighted setting. One can extend our scheme to the weighted setting via the standard virtualization approach. That is, if party P_i has weight W_i, it participates in the protocol as W_i (virtual) parties, by computing W_i partial signatures. To that end, P_i keeps a κ-bit secret key and uses PRG expansion to derive W_i secret values. Crucially, we also allow a party to have different weights in different universes. A subtle issue here is that we want the signing step to be agnostic to any universe; hence, if a party has different weights in different universes, it is unclear how many partial signatures it should compute. Motivated by our applications, we assume that there is an upper bound B on the maximum weight. Consequently, each party derives B keys from the κ-bit secret and releases B partial signatures on each message.</p><p>Strong Unforgeability. An astute reader may notice that the signatures that pass our verification algorithms are not unique. Indeed, suppose more than T parties have signed the message; then there are multiple ways to aggregate the signature. 
Namely, the aggregator could use any subset (with size &#10878; T ) of partial signatures to compute the aggregated signature. Hence, if there are enough partial signatures, one may find exponentially many correctly aggregated signatures. Consequently, it is unclear how one defines strong unforgeability for our scheme. Achieving strong unforgeability without harming efficiency is a fascinating open problem.</p></div>
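The degree-reduction trick at the core of the construction can be sanity-checked in the scalar world, with field elements standing in for group exponents; the prime and parameters below are toy values, not the pairing-friendly group the scheme actually uses:

```python
import random

# Toy scalar check of the degree-reduction idea: N independent keys sk_i
# define a degree-(N-1) polynomial f with f(i) = sk_i; exposing the N-T
# points f(-1), ..., f(-(N-T)) lets any T signer evaluations determine f(0).
p = 2**61 - 1  # a Mersenne prime; illustrative field only

def lagrange_at(points, x0):
    # Interpolate f(x0) from (x, f(x)) pairs over F_p.
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x0 - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

N, T = 5, 3
sk = {i: random.randrange(p) for i in range(1, N + 1)}  # f(i) = sk_i
all_pts = list(sk.items())
public = [(-j, lagrange_at(all_pts, -j)) for j in range(1, N - T + 1)]
# Any T signer evaluations plus the N-T public points recover f(0).
subset = [(i, sk[i]) for i in (1, 3, 5)]
assert lagrange_at(subset + public, 0) == lagrange_at(all_pts, 0)
```

In the real scheme these interpolations happen in the exponent (on partial signatures H(m)^{f(i)} and public points g^{f(j)}), so the scalars themselves are never revealed.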
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. PRELIMINARIES</head><p>Let N be the set of all natural numbers {1, 2, . . .}. For any integer n, [n] refers to the set of integers {1, 2, . . . , n}. For any integers n &lt; m, [n : m] refers to the set {n + 1, . . . , m}. For any vector v⃗, we use v⃗[i] to denote the ith coordinate of v⃗ and v⃗[: i] to denote the slice up to the ith element.</p><p>We use κ as the security parameter. A function f(κ) is negligible if, for every polynomial p(κ), f(κ) &lt; 1/p(κ) for all large enough κ.</p><p>For any vectors a⃗, b⃗ over a field F_p, we use ⟨a⃗, b⃗⟩ to denote the vector inner product. We use g^{a⃗} to denote the vector (g^{a_1}, g^{a_2}, . . . , g^{a_k}), where a⃗ = (a_1, . . . , a_k). Similarly, if a⃗ ∈ G^k is a vector over a group G of order p and b⃗ ∈ F_p^k, ⟨a⃗, b⃗⟩ denotes the vector inner product over the exponents. That is, ⟨a⃗, b⃗⟩ = ∏_{i∈[k]} a⃗[i]^{b⃗[i]}.</p><p>A machine is probabilistic polynomial time (PPT) if it is a probabilistic algorithm that runs in time poly(κ).</p><p>Definition 1 (co-CDH Assumption). A pairing group (G_1, G_2) with generators g_1, g_2 and a bilinear pairing e : G_1 × G_2 → G_T satisfies the co-CDH assumption if, for every PPT adversary A, Pr[A(g_1, g_2, g_1^s, g_2^s, h) = h^s] ⩽ negl(κ), where s ← F_p and h ← G_2.</p><p>Definition 2 (Knowledge Assumption). With respect to (1) groups G_1, G_2 of prime order p with generators g_1, g_2 and (2) a random oracle H : {0, 1}* → G_2, the knowledge assumption states that for all polynomials N = poly(κ) and stateful PPT adversaries A = (A_1, A_2), there exists a knowledge extractor E such that, for all s, r ∈ F_p and h_1, . . .
, h_N ∈ G_1, we have</p><p>where both A and E take g_1, g_2, g_1^s, g_2^s, g_2^r as public input.</p><p>Our knowledge assumption is similar to the knowledge-of-exponent (KEA) assumption <ref type="bibr">[9]</ref>, <ref type="bibr">[23]</ref>, except that we consider an oracle-aided adversary A, which has access to a random oracle H.</p><p>Definition 3 (BLS Signature <ref type="bibr">[16]</ref>). The BLS signature scheme consists of the following tuple of algorithms (KGen, Sign, Verify). Let e : G_1 × G_2 → G_T be a non-degenerate, efficiently computable, bilinear pairing between G_1, G_2 of prime order p and target group G_T. Let g_1 be the generator of G_1. Furthermore, let H : {0, 1}* → G_2 be a random oracle.</p><p>• KGen(1^κ): Sample s ← F_p and set the verification key as VK = g_1^s and the signing key as sk = s.</p><p>• Sign(sk, m): Output σ = H(m)^sk.</p><p>• Verify(VK, m, σ): Output 1 if e(g_1, σ) = e(VK, H(m)). Otherwise, output 0.</p><p>Boneh et al. <ref type="bibr">[16]</ref> proved the strong unforgeability property assuming the co-CDH assumption.</p><p>Our construction also uses the following non-interactive zero-knowledge proof of knowledge for the discrete log problem.</p><p>Definition 4 (NIZKPoK for DL). A NIZKPoK for the discrete log problem consists of a tuple of algorithms (Gen, P, V, S, E) that satisfies the following guarantees.</p><p>• Correctness. For all x ∈ F_p, Pr[V(crs, g^x, π) = 1 : crs ← Gen(1^κ), π ← P(crs, g^x, x)] = 1.</p><p>• Zero-knowledge. There exists a simulator S such that for all g^x, the output of S(g^x) is indistinguishable from the output of P(crs, g^x, x).</p><p>• Simulation Extraction. For all PPT A and x ∈ F_p,</p><p>Construction of NIZKPoK for DL. One can construct a NIZKPoK for DL based on Schnorr's protocol.</p><p>Schnorr NIZKPoK for DL:</p><p>• Gen(1^κ): Sample a random oracle crs = H.</p><p>• P(crs, g^x, x): Sample r ← F_p. Compute c = H(g, g^x, g^r) and z = r + c · x. 
Output (g^r, c, z).</p><p>Correctness is trivial. The simulation for zero-knowledge is straightforward by programming the random oracle. The simulation-extraction property follows from the forking lemma <ref type="bibr">[41]</ref>.</p></div>
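As a concrete illustration, the Schnorr NIZKPoK above can be exercised end-to-end over a toy subgroup of Z*_467 (parameters illustrative only; a real deployment would use an elliptic-curve group of cryptographic size, and the challenge encoding below is an arbitrary choice):

```python
import hashlib
import secrets

# Toy instantiation of the Schnorr NIZKPoK for DL in the order-233
# subgroup of Z_467^* generated by 4 (467 is a safe prime: 467 = 2*233 + 1).
P, N, G = 467, 233, 4  # modulus, subgroup order, generator

def H(*vals):
    # Fiat-Shamir challenge: hash the transcript into F_N (toy encoding).
    data = "|".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def prove(x):
    r = secrets.randbelow(N)
    a = pow(G, r, P)           # commitment g^r
    c = H(G, pow(G, x, P), a)  # challenge c = H(g, g^x, g^r)
    z = (r + c * x) % N        # response z = r + c*x
    return a, c, z

def verify(gx, proof):
    a, c, z = proof
    # Accept iff the challenge is well-formed and g^z = g^r * (g^x)^c.
    return c == H(G, gx, a) and pow(G, z, P) == a * pow(gx, c, P) % P

x = secrets.randbelow(N)
assert verify(pow(G, x, P), prove(x))
```

The verification equation g^z = g^r · (g^x)^c holds because z = r + c·x in the exponent, matching the construction above.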
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. MULTIVERSE THRESHOLD SIGNATURE SCHEME</head><p>In this section, we present the formal definition of a multiverse threshold signature scheme. We write {P_1, P_2, . . .} for all the parties in the system. We write {W_1, W_2, . . .} for their corresponding weights, where without loss of generality we assume that W_i ∈ N.</p><p>Definition 5 (Weighted Threshold Access Structure). An access structure Λ over a set U ⊆ {P_1, P_2, . . .} of parties is a weighted threshold access structure with a threshold T if it associates a weight with each party in this particular set; any subset S ⊆ U is called authorized (which we denote as S ∈ Λ) if and only if Σ_{i∈S} W_i ⩾ T.</p><p>Definition 6 (Universe). A universe (U, Λ) is specified by an arbitrary subset U of all parties {P_1, P_2, . . .} and an access structure Λ over U.</p><p>We also consider the standard notion of an (unweighted) threshold access structure, which is the special case where W_i = 1 for each i in each universe.</p><p>Definition 7 (Multiverse Signature Scheme). A multiverse signature scheme is a tuple of PPT algorithms denoted by Σ = (KGen, UGen_on, UGen_off, Sign, Aggregate, Verify), defined as follows:</p><p>• (pk, sk) ← KGen(1^κ): On input the security parameter 1^κ, output a public-secret key pair (pk, sk). During the online phase, every party samples one message ρ ← UGen_on using its private key and private randomness. In the offline phase, the verification key and the public parameter of the universe can be computed via the deterministic function UGen_off using all parties' messages. 
Notably, one party can reuse the same secret key for the setup of multiple universes without storing any additional information in its secret state.</p><p>To formalize the security of our scheme, we define the following oracles (OKGen, OCorrupt, OSign, OUGen), which allow the adversary to interact with the challenger. These oracles <ref type="foot">9</ref> allow the adversary to add new honest parties to the system, set up any universes, corrupt parties, and request partial signatures in an arbitrarily interleaved manner. The challenger maintains a state which includes the following:</p><p>• H is the set of honest parties;</p><p>• M is the set of malicious parties;</p><p>• L is the set of universes that have already been set up; • T_m is the set of honest parties that have already signed the message m; • sk_i is the secret key of an honest party P_i;</p><p>• ω_{(U,Λ),i} is the randomness of P_i used during setup of the universe (U, Λ); • ρ_{(U,Λ)} is the set of messages of honest parties during setup of the universe (U, Λ). All sets are initialized as ∅ and all variables are initialized as ⊥.</p><p>OKGen_state(P_i)</p><p>OCorrupt_state(P_i)</p><p>2) If P_i ∈ H then update H = H\{P_i} and output sk_i. Otherwise, output ⊥.</p><p>OSign_state(P_i, m)</p><p>OUGen_state(U, Λ)</p><p>Definition 8 (Correctness). A multiverse signature scheme Σ satisfies correctness if for any PPT adversary <ref type="foot">10</ref> A with oracle access to OKGen, OCorrupt, OSign, and OUGen, the output of Correctness-Game_A(1^κ) defined in Figure <ref type="figure">1</ref> is 1 with probability at least 1 − negl(κ).</p><p>1) The adversary outputs a universe (U, Λ) and the messages of the corrupted parties in that universe. 
Also, it outputs a message m and a set of partial signatures for parties in a set S:</p><p>2) The verification key and public parameter are computed as follows:</p><p>(VK, pp) = UGen_off((U, Λ), ρ_{(U,Λ)} ∥ {ρ_i}_{P_i∈M∩U}).</p><p>Among these, let S′ ⊆ S denote the subset where all partial signatures {σ_j}_{j∈S′} are honestly computed (hence, correct) by calling OSign(P_j, m).</p><p>3) The output of this game is 1 if and only if at least one of the following conditions is met.</p><p>• All parties in the universe are corrupted at setup. That is, ρ_{(U,Λ)} = ∅. • The subset of correct signatures is not an authorized subset. That is, S′ ∉ Λ. • The aggregated signature verifies. That is, Verify(VK, σ, m) = 1, where σ = Aggregate((U, Λ), pp, {σ_j}_{P_j∈S}).</p><p>Fig. <ref type="figure">1</ref>: Correctness Game</p><p>Definition 9 (Security). A multiverse signature scheme Σ satisfies security if for any PPT adversary A with oracle access to OKGen, OCorrupt, OSign, and OUGen, the output of the following Forgery-Game_A(1^κ) defined in Figure <ref type="figure">2</ref> is 1 with at most negl(κ) probability. <ref type="foot">11</ref></p><p>1) The adversary outputs a universe (U, Λ) and the UGen_on messages of the corrupted parties in that universe. Additionally, it outputs a message m and a signature σ. That is,</p><p>2) The verification key and public parameter are computed. That is, (VK, pp) = UGen_off((U, Λ), ρ_{(U,Λ)} ∥ {ρ_i}_{P_i∈M∩U}).</p><p>3) The output of this game is 1 only if all of the following conditions are met.</p><p>• The adversary has not acquired sufficient partial signatures of m; i.e., M ∪ T_m ∉ Λ.</p><p>• The signature verifies, i.e., Verify(VK, σ, m) = 1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 2: Forgery</head><p>On our security notion and comparison to <ref type="bibr">[8]</ref>. In a recent work, Bellare et al. <ref type="bibr">[8]</ref> investigate the subtleties in the security definitions of threshold signature schemes.</p><p>In particular, they look into unforgeability, which guarantees that the adversary should not be able to produce any "non-trivial forgery" except with negligible probability. <ref type="foot">12</ref> Based on what constitutes a "trivial forgery", they define various levels of security. We note that our definition considers the strongest possible security, as a forgery is considered non-trivial as long as fewer than T − c honest parties sign the message (which is captured by M ∪ T_m ∉ Λ in our definition). Besides achieving the strongest existential unforgeability notion, our definitions are stronger in the following aspects.</p><p>• We consider malicious security even for the universe setup phase, together with the signing phase. In most prior works, the security of the setup phase is considered separately <ref type="bibr">[33]</ref>. • The multiverse setting is novel to our work. Hence, the adversary could interleave the universe setup and signing requests in an arbitrary manner. • Our correctness guarantee is also stronger in that even if some parties submit invalid signatures, the aggregate signature should still be correct with overwhelming probability, as long as a sufficient number of partial signatures are honestly generated. Finally, as we discussed earlier, our scheme is weaker in that we do not achieve strong unforgeability.</p></div>
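The authorization predicate of Definition 5 amounts to a single weighted sum; a minimal sketch (party names, weights, and threshold are hypothetical):

```python
# Authorization check from Definition 5: a subset S of a universe is
# authorized (S in Lambda) iff the total weight of S meets the threshold T.
def is_authorized(weights, subset, threshold):
    """weights: dict mapping party id -> weight; subset: iterable of ids."""
    return sum(weights[p] for p in subset) >= threshold

# Hypothetical universe: three parties with weights 3, 1, 2 and T = 5.
weights = {"P1": 3, "P2": 1, "P3": 2}
assert is_authorized(weights, {"P1", "P3"}, threshold=5)        # 3 + 2 >= 5
assert not is_authorized(weights, {"P2", "P3"}, threshold=5)    # 1 + 2 <  5
```

The unweighted threshold setting is the special case where every weight is 1, so the check degenerates to |S| ≥ T.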
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. THE PROTOCOL A. Multiverse Threshold Signature Protocol for Weighted Threshold Access Structure</head><p>In this section, we present a multiverse signature protocol for weighted threshold access structures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Notation and Building Blocks.</head><p>&#8226; Let G 1 , G 2 be prime-order groups with g 1 , g 2 as their respective generators. Let p be the order of G 1 and G 2 . Let e(G 1 , G 2 ) &#8594; G T be a bilinear map between G 1 , G 2 and the target group G T . &#8226; Let {P 1 , P 2 , . . . } be a list of all the parties in the system. We assume that there exists an upper bound B such that for any universe (U, &#923;),</p><p>Description of KGen(1 &#954; ) :</p><p>1) Sample a random PRG seed s &#8592; {0, 1} &#954; .</p><p>2) Output (pk = g</p><p>Description of UGen on ((U, &#923;), sk) :</p><p>i=1 , T . Assume parties are indexed by some canonical ordering. Let j be the index s.t. g</p><p>We shall reconstruct the degree-(W -1) polynomial defined by the W public keys of the parties in the universe, where party P i specifies W i points.</p><p>3) In particular, let pk * j = pk j [1 :</p><p>4) Interpolate these points on the exponents:</p><p>x &#8712; {-(W -T ), . . . , -1} .</p></div>
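KGen above compresses the secret state to a single PRG seed. The following sketch shows one way such per-index secret scalars could be derived from a seed, with SHA-256 in counter mode standing in for the PRG and a placeholder group order (both are our illustrative assumptions; the paper only requires a PRG and the pairing curve's scalar field):

```python
import hashlib

# Sketch of KGen's key compression: the secret key is a single seed, and the
# per-index scalars F(sk)[j] are derived on demand with a PRG. SHA-256 in
# counter mode and the placeholder ORDER below are illustrative choices.
ORDER = 2**255 - 19   # placeholder modulus, NOT the BLS12-381 scalar order

def derive_scalars(seed: bytes, count: int) -> list[int]:
    """Expand one seed into `count` field elements, deterministically."""
    return [
        int.from_bytes(hashlib.sha256(seed + j.to_bytes(4, "big")).digest(),
                       "big") % ORDER
        for j in range(count)
    ]

sk = b"\x01" * 32                    # the party's only stored secret
# Determinism is what lets a party rejoin new universes without new state.
assert derive_scalars(sk, 4) == derive_scalars(sk, 4)
```

This is why a party can participate in many universes while holding constant-sized keys: every derived scalar is recomputable from the one seed.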
<div xmlns="http://www.tei-c.org/ns/1.0"><head>6) Compute NIZKPoK &#960;</head><p>DLog for each group element in pk j using witness F (sk).</p><p>Description of UGen off ((U, &#923;), &#961;) : 1) Repeat the same steps in UGen on and compute</p><p>verifies and e(evals 1,i , g 2 ) = e(evals 0,i , g ki 2 ).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>5) Let</head><p>and evals 1 be the (coordinate-wise) linear combination of all elements in {evals 1,i } i&#8712;Q with coefficients &#945; i . 6) Set pp = (evals 0 , evals 1 ), and set VK = (VK 0 = g f (0) 1</p><p>, VK 1 = i&#8712;Q (g ki 2 ) &#945;i ).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>7) Output (VK, pp).</head><p>Description of Sign(sk, m) :</p><p>Description of Aggregate((U, Λ), pp, σ) :</p><p>i=1 , T . Assume parties are indexed by some canonical ordering.</p><p>2) Parse σ = {σ_i}_{P_i∈S}. Initialize an empty set Q.</p><p>Sample a random vector r⃗ ← F_p^B. For each</p><p>3) Let X be the set of evaluation points corresponding to parties in Q. For instance, if</p><p>Note that the λ_i are determined by the set X. 5) We write {λ_i}_{i∈X} in short as λ⃗_par and</p><p>. Compute</p><p>7) Let σ⃗ be the concatenation of the σ_i 's where</p><p>Description of Verify(VK, σ, m) :</p><p>.</p></div>
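The Lagrange coefficients λ_i used by Aggregate can be computed directly in the scalar field. A toy sketch over a small prime field (the modulus is illustrative; the scheme applies the same coefficients in the exponent over the BLS12-381 scalar field):

```python
# Lagrange coefficients lambda_i used by Aggregate: given evaluation points X,
# sum(lambda_i * f(x_i)) recovers f(0) for any polynomial with deg(f) < |X|.
P = 2**31 - 1   # illustrative field modulus (a Mersenne prime)

def lagrange_at_zero(points):
    """Return {x_i: lambda_i} for distinct nonzero field elements x_i."""
    coeffs = {}
    for xi in points:
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P             # product of (0 - x_j)
                den = den * (xi - xj) % P
        coeffs[xi] = num * pow(den, P - 2, P) % P  # invert den via Fermat
    return coeffs

# Recover f(0) = 7 for f(x) = 7 + 3x + 5x^2 from evaluations at {1, 2, 3}.
f = lambda x: (7 + 3 * x + 5 * x * x) % P
lam = lagrange_at_zero([1, 2, 3])
assert sum(lam[x] * f(x) for x in lam) % P == 7
```

In the scheme, the same coefficients are applied to group elements, i.e., the product of evals raised to the λ_i, which interpolates f in the exponent.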
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Correctness</head><p>We now show via the following theorem that our construction satisfies correctness according to Definition 8. Theorem 1. Let us assume that the NIZKPoK proof system for DLOG is complete with probability 1. Then our MTS scheme is correct with overwhelming probability.</p><p>Proof. In our proof, we use the same notation for all variables that appeared in our description of the scheme in Section V-A.</p><p>Suppose at the end of Correctness-Game, the adversary A outputs: ((U, Λ), {ρ_i}_{P_i∈M∩U}, m, {σ_j}_{P_j∈S}). Let S′ ⊆ S be the subset of signatures that are honestly generated. We would like to show that with all but negligible probability, if (U ⊈ M) ∧ (S′ ∈ Λ), then Verify(VK, σ, m) = 1, where σ = Aggregate((U, Λ), pp, {σ_j}_{P_j∈S}).</p><p>As U ⊈ M, there exists some P* ∈ U ∩ H. Therefore, ρ* ∈ ρ_{(U,Λ)} is honestly computed by the oracle OUGen_state(U, Λ). Now consider running UGen_off((U, Λ), ρ_{(U,Λ)} ∥ {ρ_i}_{P_i∈M∩U}): since ρ* is honestly computed, we must have P* ∈ Q and thus Q ≠ ∅. Now for all P_j ∈ Q, since e(evals_{1,j}, g_2) = e(evals_0, g_2^{k_j}), we have e(evals_1, g_2) = e(evals_0, VK_1). Therefore it always holds that e(σ′_1, g_2) = e(⟨evals_1, λ⃗_par⟩, g_2) = e(⟨evals_0, λ⃗_par⟩, VK_1) = e(σ′_0, VK_1). Thus the verifier's first check will pass.</p><p>For any P_j ∈ S′, since σ_j is the output of A's query to OSign(P_j, m), we have σ_j = H(m)^{F(sk_j)} and, thus, e(⟨pk_j, r⃗⟩, H(m)) = e(g_1, ⟨σ_j, r⃗⟩) for any r⃗. 
Therefore S &#8242; &#8838; Q.</p><p>We further claim that with all but negligible probability, for all P j &#8712; Q \ S &#8242; , we have &#963; j = H(m) F (skj ) . To prove this, notice that if</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>By a union bound,</head><p>which is negligible in κ. Since S′ ∈ Λ and S′ ⊆ Q, we have Q ∈ Λ, and thus Σ_{P_i∈Q} W_i = |X| ≥ T. Since f(x) is a degree-(W − 1) polynomial, it can be interpolated correctly using its evaluations at X ∪ {T − W, . . . , −1}.</p><p>Thus Verify(VK, σ, m) = 1 with 1 − negl(κ) probability.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Security</head><p>We prove the security/unforgeability of our scheme according to Definition 9 via the following theorem.</p><p>Theorem 2. In addition to the knowledge assumption (Assumption 2), let us assume that the underlying NIZKPoK proof system for DLOG is sound with overwhelming probability. Then the security of our MTS construction can be reduced to the co-CDH assumption (Definition 1).</p><p>Due to space limits, we only give an overview of the proof in the main body. The full security proof is deferred to Appendix A.</p><p>Proof Overview. Our security proof reduces the security of our MTS scheme to the co-CDH assumption. In particular, given an adversary A that breaks the MTS scheme with non-negligible probability, we construct an adversary A* that breaks co-CDH with non-negligible probability.</p><p>On a high level, A* interacts with an external challenger for the co-CDH security game. It receives the challenge g_1^s, g_2^s, and g_2^r from the challenger and proceeds to interact with the adversary A. When it simulates the MTS security game, A* embeds the challenge g_1^s as the public key of some randomly chosen honest parties. For the other honest parties, it samples their secret keys honestly. Additionally, A* programs the random oracle H and embeds the challenge g_2^r as the hash H(m) of a few random messages. For the other messages, it honestly samples a random r′ and sets H(m) = g_2^{r′}. In the end, we show that if A successfully produces a forgery for the MTS game, we can use it to compute g_2^{rs} with non-negligible probability. Below, we highlight some technical challenges.</p><p>Oracle queries. A* can answer most queries by A directly, with a few exceptions. First, suppose A requests an honest party with the embedded challenge g_1^s as its public key to sign a message m with the embedded challenge g_2^r as its hash value H(m). 
A* cannot compute the partial signature H(m)^s, as it knows neither s nor r. However, by carefully choosing the embedding probability, we will ensure that, with non-negligible probability, this never happens.</p><p>Second, since we allow A to adaptively corrupt parties, it may decide to corrupt an honest party with the embedded challenge g_1^s as its public key. Again, we cannot give the secret state of this party, as we do not know s. However, with non-negligible probability, A will not corrupt any honest parties with an embedded challenge. This is because A succeeds with non-negligible probability in the MTS game, and A only succeeds if it does not corrupt all parties. Therefore, with non-negligible probability, A does not corrupt any parties where we embed the challenges, as there are very few of them. Hence, A* can successfully answer all of A's queries with non-negligible probability. Solve co-CDH challenge. Conditioned on (1) A* successfully answering all of A's queries and (2) A indeed generating a forgery, we now need to argue that we can solve the co-CDH problem and compute g_2^{sr}. Note that A produces a signature (σ, σ_0, σ_1) such that e(VK_0/σ_0, H(m)) = e(g_1, σ) and e(σ_1, g_2) = e(σ_0, VK_1).</p><p>Our plan is to show that, with non-negligible probability, VK_0/σ_0 can be written as g_1^{a·s+b} for some a ≠ 0 and b that A* knows. If so, σ must be equal to H(m)^{a·s+b}. Consequently, if H(m) is a hash value with the embedded challenge g_2^r (which happens independently with non-negligible probability), we can easily use σ to compute g_2^{sr}.</p><p>To argue this, we invoke the knowledge assumption <ref type="foot">13</ref> to argue that the only way that A can compute σ_0 and σ_1 that pass the second equation is by a linear combination of the public points, and one can extract such a linear combination. 
Consequently, one can write VK_0/σ_0 directly as some linear combination of all parties' public keys. Furthermore, by relying on the knowledge-extraction property of the NIZKPoK scheme, we can extract the malicious parties' secret keys. Hence, we know all the exponents of public keys in the linear combination, except for those with embedded challenges, which means VK_0/σ_0 can indeed be written as g_1^{a·s+b}. Next, we note that a ≠ 0 with high probability, as we pick a random honest party in which to embed the challenge: no matter what linear relation the adversary chooses to aggregate the public points, a ≠ 0 with high probability. Security Loss. Our proof shows that if the adversary A succeeds in the MTS game with some non-negligible probability δ, then A* succeeds in solving the co-CDH problem with some non-negligible probability δ′. We emphasize that our security proof does not optimize the security loss between δ′ and δ. Note that this security loss affects the choice of security parameters and, hence, all parameters in our scheme, e.g., signature size and aggregation/verification time. Therefore, a tighter security loss would potentially lead to a more efficient construction. We leave this as exciting future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. IMPLEMENTATION AND EVALUATION</head><p>We implement our MTS construction in Rust and release it open-source at <ref type="url">https://github.com/rsinha/mts</ref>. We use the BLS12-381 pairing-based curve <ref type="bibr">[15]</ref> and the hashing-to-elliptic-curve method defined in <ref type="bibr">[28]</ref>. For efficiency, we implement multi-exponentiation of group elements (within the aggregator) using Pippenger's method <ref type="bibr">[17]</ref>, <ref type="bibr">[40]</ref>, which, for n group elements, requires O(n/log(n)) running time as opposed to O(n).</p><p>For a fair comparison of all schemes, we only implement the single-threaded version of our algorithms, though there are obvious opportunities for parallelism. All experiments are run on a MacBook Pro with an M1 Pro chip and 32 GB RAM. We also report EVM gas costs <ref type="foot">14</ref> for publishing and verifying signatures on-chain. 13 Note that A is an oracle-aided circuit that has access to the random oracle H. In particular, our knowledge assumption assumes that one could extract from such oracle-aided circuits. 14 Our calculation uses the pre-compiled gas costs for the alt_bn128 curve as defined in EIP-1108 <ref type="bibr">[22]</ref>: ECADD costs 150, ECMUL costs 6,000, and k pairings cost 35,000 · k + 45,000. The gas cost for each 32-byte storage slot is 20,000.</p><p>We measure the performance of signing, verification, and aggregation algorithms. We compare our MTS construction against alternative threshold signature schemes, when adapted to the multiverse setting. These include: 1) the generic zk-SNARK approach; 2) compact certificates of Micali et al. <ref type="bibr">[38]</ref>; 3) (vanilla) threshold BLS; and 4) multisignatures based on BLS. These are described below.</p><p>Aggregation using zk-SNARKs. 
The aggregator can function as a prover who convinces the verifier that it knows a threshold number of valid signatures, each verifiable under a distinct public key. To set up this experiment, we use the gnark library <ref type="bibr">[2]</ref> to create a circuit composing multiple instances of the signature verification circuit. <ref type="foot">15</ref> For fairness, we choose the most SNARK-friendly signature scheme available in the gnark library, which is EdDSA: with the gnark frontend, a single EdDSA verification produces roughly 6.2K constraints in the Groth16 system <ref type="bibr">[35]</ref> and 13.1K constraints in the PLONK system <ref type="bibr">[29]</ref>. To get a ballpark estimate, we assume the verifier has the entire address book mapping nodes to their public keys; alternatively, the proof can be constructed with respect to a commitment to the address book, but that only adds to the prover (aggregator) running time that we report. Note that this approach computes the MiMC <ref type="bibr">[7]</ref> hash function, used in the signature scheme as a random oracle, inside the SNARK circuit. As we show later, the aggregator's running time is prohibitively expensive.</p><p>Compact Certificates. Micali et al. <ref type="bibr">[38]</ref> introduce compact certificates, based on non-interactive proofs of knowledge in the random oracle model. The certificate proves that signers have a sufficient total weight, while only including a logarithmic number of individual signatures. As we show later, the certificate size is orders of magnitude larger than ours, incurring a heavy gas cost.</p><p>Threshold BLS. The (vanilla) threshold BLS system uses a BLS scheme that is set up by a distributed key generation (DKG) protocol, such as that of Gennaro et al. <ref type="bibr">[33]</ref>, or with the recent improvements in <ref type="bibr">[44]</ref>. 
Recall that a party produces independent partial signatures for each universe; not surprisingly, as we show later, the network bandwidth usage is prohibitively expensive. BLS Multisignature. We also test MTS based on BLS multisignatures <ref type="bibr">[14]</ref>, <ref type="bibr">[37]</ref>, where rogue-key attacks are addressed via proofs-of-possession in a setup phase; see Appendix B for details on this construction. In this scheme: 1) the aggregate signature contains the identity of each signer; and 2) the aggregate public key is computed by the verifier (e.g., a smart contract) by aggregating the public keys of all signers. As we show later, this scheme has expensive (on-chain) verification costs, both in setup (storing linearly many public keys) and per-signature operation (multiplying linearly many group elements).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Signature Size</head><p>We report the signature size in Table <ref type="table">I</ref>. Depending on the scheme, a signature has several components: G_1 and G_2 denote group elements (of size 48 and 96 bytes, respectively) from the source groups of the pairing curve, and F denotes field elements (of size 32 bytes). Except for compact certificates and BLS multisig, all other schemes produce an aggregate signature of constant size, with threshold BLS producing the shortest signature of 1 group element (48 bytes).</p><p>Compact certificates use logarithmic-size proofs; for 128-bit security, soundness requires them to output a certificate of size 7.5-12 KB for 100 parties, and roughly 40-250 KB for 10K parties. Table <ref type="table">I</ref> includes a data point for a threshold that is 80% of the total weight, and the signature is even larger for lower thresholds. <ref type="foot">16</ref> In fact, for a few hundred nodes, the certificate is larger than simply outputting all signatures, due to the overheads of the Merkle paths; their approach is targeted at networks with millions of nodes, which is orders of magnitude larger than any existing blockchain network (Bitcoin has 10K nodes). Therefore, compact certificates are impractical for our use case, especially since the aggregate signature is published and verified on-chain.</p><p>BLS multisig produces a linear-size aggregate signature, as the aggregator must communicate which parties have signed (1 bit per party). Though asymptotically worse, it fares better in practice than compact certificates; for n nodes, the signature must have 1 group element and n bits, so for practical values of n, say 1000 nodes, we have a 224-byte signature.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Verification Time</head><p>Table I also reports verification complexity, in algebraic operations, wall clock time, and Ethereum virtual machine (EVM) gas units (computed using <ref type="bibr">[3]</ref>) -we report the EVM gas cost for both smart contract setup (where the verification key is stored on-chain) and persignature verification. Algebraic operations are of several types: H denotes hash functions, P denotes pairing operations, while group operations in G 1 are either exponentiations or multiplications. Again, with the exception of compact certificates and BLS multisig, all schemes have constant time.</p><p>Verifying compact certificates has the highest gas cost. We are also under-approximating here, as we only include the gas cost of computing keccak256 hashes (costing 28 gas units <ref type="bibr">[4]</ref>) in the Merkle paths and the EdDSA (over Curve25519) signatures (costing 2000 gas units <ref type="bibr">[5]</ref>); i.e., we are not including the cost of verifying the proof against the commitment to the public key set.</p><p>BLS multisigs require linear group additions to compute the aggregate key, causing the gas cost to exceed that of MTS verification beyond a few hundred signers. It also requires the smart contract setup to store n public keys on-chain, which comes out to 60 million gas for 2000 nodes (roughly $1460 at the time of writing).</p><p>The summary of our analysis here is that compact certificates and BLS multisignatures have high verification complexity, when used in the multiverse setting.</p></div>
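The gas figures above can be cross-checked against the EIP-1108 precompile prices quoted in footnote 14. A small model follows; the packed-slot accounting for public keys is our reading of the 60M-gas setup figure, not an exact EVM trace.

```python
# EVM gas model from footnote 14 (EIP-1108 precompile prices) plus the
# 20,000-gas cost per 32-byte storage slot.
ECADD, ECMUL, SSTORE_SLOT = 150, 6_000, 20_000

def pairing_gas(k: int) -> int:
    """Gas for a product of k pairings: 35,000*k + 45,000."""
    return 35_000 * k + 45_000

def key_storage_gas(n_keys: int, key_bytes: int = 48) -> int:
    """One-time cost of storing n packed public keys on-chain."""
    slots = (n_keys * key_bytes + 31) // 32   # 32-byte slots, rounded up
    return slots * SSTORE_SLOT

# Storing 2000 48-byte public keys: 3000 packed slots -> 60M gas,
# matching the BLS multisig setup cost quoted in this section.
assert key_storage_gas(2000) == 60_000_000
```

This also makes the scaling argument concrete: the pairing cost is a fixed constant per verification, while the multisig setup cost grows linearly with the number of registered signers.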
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Bandwidth Requirement</head><p>We now analyze the inbound bandwidth complexity from the point of view of the aggregator. The bandwidth is a proxy for network traffic flowing over P2P channels in the oracle network. We report the results in Table <ref type="table">II</ref>, for select parameters of network size, weight (per party), and the number of universes.</p><p>All three of the compact certificate, multisig, and zk-SNARK based approaches use similar bandwidth, as each party sends one signature; the bandwidth is independent of the weight or the number of universes.</p><p>In our MTS scheme, the bandwidth is linear in the weight, but constant in the number of universes. Our conclusion from this study is that while MTS is worse than the SNARK approach for large weights, the bandwidth is still under 10 MB for reasonable weights, and therefore not problematic for practical parameters. In contrast, the threshold BLS scheme uses a little under 1 GB for 100 universes, for weight B = 50; this is for one message, and quickly becomes unmanageable for our setting, as Chainlink concurrently serves several hundred data feeds (with a separate message for each feed).</p><p>The summary of the bandwidth analysis is that the naïve application of threshold BLS to the multiverse setting is not at all scalable.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. Aggregation Time</head><p>1) Unweighted Setting: We start with the unweighted setting because it is the default or natural setting in several applications of threshold signatures. Table III displays the running time of the aggregator for the different schemes in the unweighted setting (i.e., B = 1). We measure how the running time increases as the size of the network grows (while keeping the threshold at a fixed fraction T = W/2, where W = n equals the number of nodes). 
Since aggregation is performed for each universe separately, the reported running time is per universe. Both threshold and multisig BLS allow for efficient aggregation, wherein multisig requires at most n group multiplications. The SNARK prover is too inefficient for networks beyond a few hundred parties, as any latency beyond 30 seconds is too long for our setting.</p></div>
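The bandwidth comparison of Section VI-C admits a back-of-envelope sketch, assuming 96-byte G_2 partial signatures and one partial signature per unit of weight. This accounting is our reading of the schemes, offered only because it reproduces the ballpark figures quoted in the text.

```python
# Aggregator inbound bandwidth, assuming 96-byte G2 partial signatures and
# one partial signature per unit of weight (an illustrative model).
SIG_BYTES = 96

def threshold_bls_bandwidth(n: int, weight: int, universes: int) -> int:
    """Vanilla threshold BLS: fresh partial signatures for every universe."""
    return n * weight * universes * SIG_BYTES

def mts_bandwidth(n: int, weight: int, universes: int) -> int:
    """MTS: linear in weight, constant in the number of universes."""
    return n * weight * SIG_BYTES

n, B, U = 2000, 50, 100
assert threshold_bls_bandwidth(n, B, U) == 960_000_000   # "a little under 1 GB"
assert mts_bandwidth(n, B, U) == 9_600_000               # "under 10 MB"
```

The factor-of-universes gap (100x here) is exactly the multiverse savings the scheme is designed for.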
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 3: MTS Aggregation Running Time</head><p>2) Weighted Setting: One valuable property of compact certificates, multisig, and SNARKs is that aggregation is independent of the weights (compact certificates do depend on the threshold); so their performance in the weighted setting is the same as in Table <ref type="table">III</ref> above. Nevertheless, despite being asymptotically worse, we show that MTS aggregation is significantly more efficient for reasonable weight parameters (e.g., for B &lt; 25). See Figure <ref type="figure">3</ref> for an exhaustive benchmarking of MTS aggregation. MTS aggregation is roughly twice the running time of threshold BLS, as MTS performs multi-exponentiation in both groups G_1 and G_2 to compute the three group elements within the signature.</p><p>We find that the majority of the time is spent in computing Lagrange coefficients, which is a quadratic operation. Tomescu et al. <ref type="bibr">[44]</ref> show how to make this computation quasi-linear, by using FFTs and evaluating polynomials at roots of unity, and we can borrow their techniques.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VII. PRIOR WORKS</head><p>We briefly summarize the existing works on threshold signatures. Since their introduction <ref type="bibr">[25]</ref>, <ref type="bibr">[26]</ref>, a large body of work has studied the security of threshold variants of prominent signature schemes such as ECDSA <ref type="bibr">[21]</ref>, <ref type="bibr">[31]</ref>, <ref type="bibr">[32]</ref>, <ref type="bibr">[34]</ref>, Schnorr signatures <ref type="bibr">[36]</ref>, and BLS signatures <ref type="bibr">[13]</ref>. On the other hand, threshold signature schemes can be generically implemented using any signature scheme and succinct non-interactive proofs <ref type="bibr">[11]</ref>, <ref type="bibr">[12]</ref>. Recently, Micali et al. <ref type="bibr">[38]</ref> used this approach to construct threshold signatures specially tailored to the weighted setting. Finally, multisignatures <ref type="bibr">[14]</ref>, <ref type="bibr">[37]</ref> can be viewed as a special case of threshold signatures, where the aggregate signature certifies that all parties have signed the message.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VIII. CONCLUSION</head><p>We propose a multiverse threshold signature scheme, with compact keys, compact partial signatures, and a non-interactive setup. The final signature size is compact, and allows for fast verification, even on smart contracts. Experiments show that the scheme is practical for application in decentralized oracle networks for blockchains. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Analysis of A*'s winning probability.</head><p>To analyze A*'s winning probability, we first use a sequence of hybrids to argue that A cannot distinguish whether it is interacting with A* or the real MTS game.</p><p>Hybrid 0: In this experiment, when A* interacts with A, it uses the real view. That is, it does not embed any challenges (hence, it also uses the honest NIZKPoK proof instead of a simulated one). The point is that, since this is the real experiment, A will successfully produce a forgery in the MTS game with non-negligible probability.</p><p>Next, we switch to the description of our adversary A* in the following two hybrids.</p><p>Hybrid 1: In this hybrid, we embed the challenges, but still use the secret s to generate the NIZKPoK proof. That is, we first sample s ← F_p and define (g_1^s) = g_1^s. Whenever A queries OKGen_state(P_i), first sample a vector γ⃗_i ∈ F_p^B such that for each j ∈ [B], with probability 1/κ^{c+1}, γ⃗_i[j] ← F_p is randomly sampled, and otherwise it is zero. If γ⃗_i = 0^B, then sample (pk_i, sk_i) ← KGen(1^κ); otherwise sample s⃗_i ← F_p^B directly instead of a PRG seed, and then set pk_i = g_1^{s⃗_i}. Finally, reset pk_i ← pk_i · (g_1^s)^{γ⃗_i}. Later, whenever A queries OSign_state(P_i, m), we compute σ = H(m)^{s·γ⃗_i + (F(sk_i) or s⃗_i)} accordingly. Since we never release the secret seed sk_i for P_i ∈ R, this hybrid is indistinguishable from the previous hybrid due to the security of the PRG.</p><p>Hybrid 2: In this hybrid, we simulate the NIZKPoK proof. Specifically, whenever A queries OUGen_state(U, Λ), for all P_i ∈ R ∩ U, we do not honestly compute ρ_i = (evals_1, g_2^k, π_DLog) ← UGen_on((U, Λ), sk_i) using the secret sk_i. 
We simulate the proof π_DLog using the simulator of the NIZKPoK. Indistinguishability immediately follows from the zero-knowledge property.</p><p>Hybrid 3: In this hybrid, we also program the random oracle and embed the challenge g_2^r. Additionally, we declare failure at those places where (1) we cannot answer without the knowledge of s or r (in particular, at steps 2(c) and 2(e)) and (2) A* fails since it did not successfully embed a challenge (in particular, at steps 3 and 4). Note that the only difference from Hybrid 2 is that A* will now sometimes declare failure and abort. For example, this happens if the adversary corrupts an honest party. However, conditioned on A succeeding in producing a forgery, these failures are avoided with an independent, non-negligible probability. For instance, for each partial signature requested, there is a probability of 1/q_H · 1/κ^{c+1} that we embedded both the public key and the hash H(m), and hence a probability of 1/q_H · 1/κ^{c+1} of failure. Since the adversary requests at most q_H · κ^c partial signatures, the probability that A* never fails is at least (1 − 1/q_H · 1/κ^{c+1})^{q_H·κ^c}, which is non-negligible. Similarly, we can bound the other failure probabilities. We note that this analysis is entirely analogous to the security proof of the BLS signature, as one programs the random oracle and embeds the challenges. We refer the reader to <ref type="bibr">[16]</ref> for a more elaborate analysis.</p><p>Therefore, we have established that when A* interacts with A, A will successfully output a forgery at step 2 with non-negligible probability. Finally, we analyze the probability that A* successfully transforms the output of A into a solution of the co-CDH challenge.</p><p>Invoking the Knowledge Assumption. 
Next, we argue that, at step 5, A* will indeed be able to extract a linear combination λ⃗. Indeed, we can invoke Assumption 2, where the group elements h_1, . . . , h_N correspond to the public points of the polynomial, g_1^{f(-1)}, g_1^{f(-2)}, . . .. The random exponent k′ corresponds to some honest party's k_i. (Since the adversary succeeds in producing a forgery, there must be an honest party within the universe.) The maliciously chosen k″ corresponds to Σ_{j≠i} α_j · k_j. Finally, the exponent k corresponds to k = α_i · k′ + k″ = Σ_i α_i · k_i. Now, we fix all the randomness of A* and {α_j}_{j≠i}, and treat A as an oracle-aided circuit<ref type="foot">17</ref> that plays the security game of Assumption 2 and outputs two elements σ_0, σ_1. Since</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0"><p>At the time of writing, Chainlink is the largest oracle network, comprising over 300 participant nodes, serving over 2000 data feeds, and generating over $4.5M USD in monthly revenue for the participants.</p></note>
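The failure-probability bound from Hybrid 3 can be sanity-checked numerically: the quantity (1 - 1/q_H · 1/κ^{c+1})^{q_H·κ^c} is essentially e^{-1/κ}, hence bounded away from zero by an inverse polynomial in the security parameter. The sketch below uses illustrative parameter values, not ones taken from the paper.

```python
from math import exp

def never_fail_prob(q_H: int, kappa: int, c: int) -> float:
    """Probability that A* never declares failure across all
    q_H * kappa**c partial-signature queries, each of which fails
    independently with probability 1/(q_H * kappa**(c + 1))."""
    p_fail = 1.0 / (q_H * kappa ** (c + 1))
    n_queries = q_H * kappa ** c
    return (1.0 - p_fail) ** n_queries

# Illustrative parameters: q_H hash queries, security parameter kappa.
# Since n_queries * p_fail = 1/kappa, the bound is close to exp(-1/kappa).
p = never_fail_prob(q_H=2 ** 20, kappa=128, c=2)
print(p > 0.99, abs(p - exp(-1.0 / 128)) < 1e-6)  # True True
```

The per-query exponents cancel so that only the factor 1/κ survives in the exponent, which is why the bound does not degrade with the number of hash queries.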
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1"><p>In Chainlink's protocol <ref type="bibr">[19]</ref>, for each message to be signed, there is a single leader who prepares the message and requests all relevant nodes (e.g., those supporting a given price feed) to sign it.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_2"><p>In a weighted setting, instead of a T-out-of-N access structure, parties are assigned different weights, and aggregation is possible only when partial signatures are produced by parties whose combined weight is greater than or equal to the threshold. A special case of this is the standard threshold setting, in which each party has equal weight.</p></note>
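The weighted aggregation condition described in the footnote above reduces to a one-line check. The helper below is a hypothetical illustration of that condition, not part of the paper's scheme.

```python
def can_aggregate(weights: dict, signers: set, threshold: int) -> bool:
    """Aggregation is possible iff the combined weight of the parties
    that produced partial signatures meets the threshold."""
    return sum(weights[p] for p in signers) >= threshold

# Unit weights recover the standard T-out-of-N threshold setting:
unit = {p: 1 for p in range(5)}           # N = 5 parties, weight 1 each
print(can_aggregate(unit, {0, 2, 4}, 3))  # True  (3-out-of-5)
print(can_aggregate(unit, {0, 4}, 3))     # False (only 2 signers)
```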
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3"><p>Note that this aggregate signature is not unique, as any multisignature aggregated from enough partial signatures will pass verification.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_4"><p>For instance, by Lagrange interpolation over a degree-2 polynomial, we have f(-1) = 6f(1) - 8f(2) + 3f(3). In the first universe, with ordering P_1, P_2, P_3, the adversary learns 6sk_1 - 8sk_2. In the second universe, with ordering P_4, P_1, P_2, it learns -8sk_1 + 3sk_2. In any field F where (6, -8) and (-8, 3) are linearly independent, the adversary can fully reconstruct sk_1 and sk_2.</p></note>
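The Lagrange coefficients quoted in the footnote above can be verified mechanically. The sketch below recomputes them over the rationals and checks the linear-independence condition via the determinant.

```python
from fractions import Fraction

def lagrange_coeffs_at(x_eval, xs):
    """Coefficients c_j such that f(x_eval) = sum_j c_j * f(xs[j])
    for any polynomial f of degree < len(xs), at distinct points xs."""
    coeffs = []
    for xj in xs:
        c = Fraction(1)
        for xm in xs:
            if xm != xj:
                c *= Fraction(x_eval - xm, xj - xm)
        coeffs.append(c)
    return coeffs

# f(-1) from evaluations at 1, 2, 3 of a degree-2 polynomial:
print([int(c) for c in lagrange_coeffs_at(-1, [1, 2, 3])])  # [6, -8, 3]

# (6, -8) and (-8, 3) are linearly independent whenever the
# determinant 6*3 - (-8)*(-8) is nonzero in the field.
print(6 * 3 - (-8) * (-8))  # -46
```

Since -46 is nonzero in any field of characteristic other than 2 or 23, the two leaked combinations determine sk_1 and sk_2 in almost every field.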
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_5"><p>The actual protocol computes k = Σ_i α_i · k_i, where α_i is the output of the random oracle on input (g^{k_1}, . . . , g^{k_n}, g^{k_i}). Similar to <ref type="bibr">[14]</ref>, this is to prevent attacks by rushing adversaries.</p></note>
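The aggregation rule in the footnote above can be sketched as follows. This is an illustrative model only: integers stand in for group elements, SHA-256 plays the random oracle, and the modulus P is a hypothetical stand-in for the group order; none of these choices come from the paper.

```python
import hashlib

# Hypothetical stand-in for the group order; NOT from the paper.
P = 2 ** 255 - 19

def alpha_i(commitments, own):
    """alpha_i = H(g^{k_1}, ..., g^{k_n}, g^{k_i}), modeled here by
    hashing integer stand-ins for the group elements with SHA-256."""
    h = hashlib.sha256()
    for c in list(commitments) + [own]:
        h.update(c.to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big") % P

def aggregate_k(ks, commitments):
    """k = sum_i alpha_i * k_i (mod P): each coefficient is bound to
    the full list of commitments, so a rushing adversary cannot pick
    its own k_j after seeing the other parties' contributions."""
    return sum(alpha_i(commitments, commitments[i]) * k
               for i, k in enumerate(ks)) % P
```

Binding every α_i to the complete commitment list is the point of the construction: dropping the final per-party input would make all coefficients identical, and omitting the list would let a rushing adversary bias k.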
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_6"><p>The adversary is allowed to directly add malicious parties without using these oracles.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_7"><p>For the sake of simplicity, we assume that the adversary is restricted to instantiating a universe for a given pair (Λ, U) only once. We stress that this assumption is made only for simplicity, and our schemes remain secure even if the same universe is instantiated multiple times.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_8"><p>We note that our security game allows the adversary to mount malleability attacks. For instance, the adversary could copy an honest party's public key (and its corresponding NIZKPoK) as its own public key. Intuitively, however, these attacks are not helpful for producing a valid forgery; our security proof implicitly proves this.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_9"><p>In the non-threshold setting, a "trivial forgery" is simply defined by the signatures obtained by querying the signing oracle.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="15" xml:id="foot_10"><p>Alternatively, we could have produced k independent Groth16 proofs and aggregated them using Bünz et al. <ref type="bibr">[20]</ref> (implemented in <ref type="bibr">[30]</ref>), which results in O(log k)-sized proofs. Recursive composition techniques also exist <ref type="bibr">[18]</ref>, but they are relatively inefficient.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="16" xml:id="foot_11"><p>Irrespective of the number of nodes and weights, the certificate contains the following number of signatures (in addition to the Merkle path hashes) for 128-bit security: 1343 for T = 0.55W, 702 for T = 0.6W, 380 for T = 0.7W, 272 for T = 0.8W, and 217 for T = 0.9W.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="17" xml:id="foot_12"><p>Here, A has access to the random oracle H.</p></note>
		</body>
		</text>
</TEI>
