

Title: Computational Hardness of Collective Coin-Tossing Protocols
Ben-Or and Linial, in a seminal work, introduced the full information model to study collective coin-tossing protocols. Collective coin-tossing is an elegant functionality that provides uncluttered access to the primary bottlenecks to achieving security in a given adversarial model. Additionally, research outcomes for this versatile functionality have direct consequences for diverse topics in mathematics and computer science. This survey summarizes the current state of the art of coin-tossing protocols in the full information model and recent advances in this field. In particular, it elaborates on a new proof technique that identifies the minimum insecurity incurred by any coin-tossing protocol and, simultaneously, constructs a coin-tossing protocol achieving that insecurity bound. The combinatorial perspective on this proof technique yields new coin-tossing protocols that are more secure than well-known existing coin-tossing protocols, leading to new isoperimetric inequalities over product spaces. Furthermore, an algebraic reimagining of this proof technique resolves several long-standing fundamental hardness-of-computation problems in cryptography. This survey presents one representative application of each of these two perspectives.
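As a concrete point of reference (not taken from the survey), the following Python sketch simulates the classic single-round majority coin-tossing protocol in the full information model and estimates how much a single corrupted party, who speaks last after seeing everyone else's bit, can bias the outcome; the party count, trial count, and function names are illustrative choices.

```python
import random

# A minimal sketch (not from the survey) of the classic single-round majority
# coin-tossing protocol in the full information model.  One corrupted party
# waits for the other broadcasts and fixes its bit to push the outcome towards 1;
# the party count, trial count, and function names are illustrative choices.

def majority_output(bits):
    """Protocol output: 1 iff a strict majority of the broadcast bits are 1."""
    return int(sum(bits) > len(bits) // 2)

def honest_run(n, rng):
    """All n parties broadcast independent fair bits."""
    return majority_output([rng.randint(0, 1) for _ in range(n)])

def attacked_run(n, rng):
    """One corrupted party waits for the other n-1 bits, then broadcasts 1
    (the optimal choice for biasing the majority towards 1)."""
    honest_bits = [rng.randint(0, 1) for _ in range(n - 1)]
    return majority_output(honest_bits + [1])

def estimate_bias(n=101, trials=100_000, seed=0):
    rng = random.Random(seed)
    p_honest = sum(honest_run(n, rng) for _ in range(trials)) / trials
    p_attacked = sum(attacked_run(n, rng) for _ in range(trials)) / trials
    return p_honest, p_attacked

if __name__ == "__main__":
    p_honest, p_attacked = estimate_bias()
    print(f"Pr[output = 1], all honest     : {p_honest:.3f}")
    print(f"Pr[output = 1], one corruption : {p_attacked:.3f}")
```

The gap between the two estimates is the bias a single corrupted party buys against the majority protocol, which is on the order of 1/√n; determining the smallest such insecurity achievable by any protocol is the question the survey's proof technique addresses.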
Award ID(s): 2055605
NSF-PAR ID: 10331238
Author(s) / Creator(s):
Date Published:
Journal Name: Entropy
Volume: 23
Issue: 1
ISSN: 1099-4300
Page Range / eLocation ID: 44
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Quantum key distribution (QKD) has established itself as a groundbreaking technology, showcasing inherent security features that are fundamentally proven. Qubit-based QKD protocols that rely on binary encoding face an inherent constraint on the secret key capacity: at most one secret bit can be carried per photon. Qudit-based QKD protocols, on the other hand, have an advantage in scenarios where photons are scarce and noise is present, as they enable the transmission of more than one secret bit per photon. While proof-of-principle entanglement-based qudit QKD systems have been demonstrated over the years, the maximum distribution distance has so far remained at a 20 km fiber distance. Moreover, in these entangled high-dimensional QKD systems, the witnessing and distribution of quantum steering have not been shown before. Here we present a high-dimensional time-bin QKD protocol based on energy-time entanglement that generates a secure finite-length key capacity of 2.39 bits/coincidence and secure cryptographic finite-length keys at 0.24 Mbit s⁻¹ over a 50 km optical fiber link. Our system is built entirely from readily available commercial off-the-shelf components and is secured against collective Gaussian attacks by the nonlocal dispersion cancellation technique. Furthermore, we set new records for witnessing both energy-time entanglement and quantum steering over different fiber distances. When operating with a quantum channel loss of 39 dB, our system retains its large-alphabet character, achieving a secure key rate of 0.30 kbit s⁻¹ and a secure key capacity of 1.10 bits/coincidence when finite-key effects are taken into account. Our experimental results closely match the theoretical upper bound on secure cryptographic keys in high-dimensional time-bin QKD protocols (Mower et al 2013 Phys. Rev. A 87 062322; Zhang et al 2014 Phys. Rev. Lett. 112 120506), and outperform recent state-of-the-art qubit-based QKD protocols in terms of secure key throughput with commercial single-photon detectors (Wengerowsky et al 2019 Proc. Natl Acad. Sci. 116 6684; Wengerowsky et al 2020 npj Quantum Inf. 6 5; Zhang et al 2014 Phys. Rev. Lett. 112 120506; Zhang et al 2019 Nat. Photon. 13 839; Liu et al 2019 Phys. Rev. Lett. 122 160501; Zhang et al 2020 Phys. Rev. Lett. 125 010502; Wei et al 2020 Phys. Rev. X 10 031030). The simple and robust entanglement-based high-dimensional time-bin protocol presented here opens the path to practical long-distance quantum steering and QKD with multiple secure bits per coincidence, and higher secure key rates than mature qubit-based QKD protocols.

     
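As a back-of-the-envelope illustration of the large-alphabet advantage described above, the short sketch below computes the at-most log2(d) raw bits carried per detected coincidence by a d-dimensional time-bin alphabet and the corresponding raw throughput. The dimensions and coincidence rate are made-up examples, not the experiment's parameters, and the figure ignores error correction and privacy amplification.

```python
import math

# Illustrative only: raw information per coincidence for a d-dimensional
# time-bin alphabet, before error correction and privacy amplification.
# The numbers below are made-up examples, not the paper's measured values.

def raw_bits_per_coincidence(dimension: int) -> float:
    """A d-dimensional alphabet carries at most log2(d) bits per coincidence."""
    return math.log2(dimension)

def raw_throughput(coincidence_rate_hz: float, dimension: int) -> float:
    """Raw throughput (bits/s) = coincidences/s x bits per coincidence."""
    return coincidence_rate_hz * raw_bits_per_coincidence(dimension)

if __name__ == "__main__":
    for d in (2, 4, 8, 16):
        print(f"d={d:>2}: {raw_bits_per_coincidence(d):.2f} raw bits/coincidence, "
              f"{raw_throughput(100_000, d) / 1e3:.0f} kbit/s at 100k coincidences/s")
```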
  2.
    In this work we study the problem of minimizing the round complexity of securely evaluating multiparty functionalities while making black-box use of polynomial-time assumptions. In Eurocrypt 2016, Garg et al. showed that, assuming all parties have access to a broadcast channel, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. A sequence of follow-up works matched this lower bound under a variety of assumptions. Unfortunately, none of these works makes black-box use of the underlying cryptographic primitives. In Crypto 2021, Ishai, Khurana, Sahai, and Srinivasan came closer to matching the four-round lower bound, obtaining a five-round protocol that makes black-box use of oblivious transfer and PKE with pseudorandom public keys. In this work, we show how to realize any input-less functionality (e.g., coin-tossing, generation of key pairs, and so on) in four rounds while making black-box use of two-round oblivious transfer. As an additional result, we construct the first four-round MPC protocol for generic functionalities that makes black-box use of the underlying primitives, achieving security against non-aborting adversaries. Our protocols are based on a new primitive called list two-party computation. This primitive offers relaxed security compared to the standard notion of secure two-party computation; despite this relaxation, we argue that it suffices for our applications. List two-party computation is of independent interest, as we argue it can also be used for the generation of setups, such as oblivious-transfer correlated randomness, in three rounds. Prior to our work, generating such a setup required at least four rounds of interaction or a trusted third party.
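Coin-tossing is the simplest of the input-less functionalities mentioned above. As a point of reference only, here is a naive two-party commit-then-reveal coin toss in Python; it is not the four-round black-box protocol of this work and offers no simulation-based security guarantee, and the hash-based commitment and all names are illustrative assumptions.

```python
import hashlib
import secrets

# A toy commit-then-reveal coin toss between two parties, included only to make
# the input-less coin-tossing functionality concrete.  This naive sketch is NOT
# the four-round black-box protocol from the abstract and gives no
# simulation-based security guarantee; names and parameters are illustrative.

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit as a hash of (randomness || bit); return (commitment, opening)."""
    opening = secrets.token_bytes(32) + bytes([bit])
    return hashlib.sha256(opening).digest(), opening

def verify_open(commitment: bytes, opening: bytes) -> int:
    """Check the opening against the commitment and recover the committed bit."""
    assert hashlib.sha256(opening).digest() == commitment, "invalid opening"
    return opening[-1]

if __name__ == "__main__":
    # Message 1: Alice commits to a random bit and sends the commitment.
    a_bit = secrets.randbelow(2)
    a_com, a_open = commit(a_bit)
    # Message 2: Bob replies with his random bit in the clear.
    b_bit = secrets.randbelow(2)
    # Message 3: Alice opens her commitment; the joint coin is the XOR.
    coin = verify_open(a_com, a_open) ^ b_bit
    print("joint coin:", coin)
```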
  3. Many proofs of interactive cryptographic protocols (e.g., as in Universal Composability) operate by proving the protocol at hand to be observationally equivalent to an idealized specification. While pervasive, formal tool support for observational equivalence of cryptographic protocols is still a nascent area of research. Current mechanization efforts tend to either focus on diff-equivalence, which establishes observational equivalence between protocols with identical control structures, or require an explicit witness for the observational equivalence in the form of a bisimulation relation. Our goal is to simplify proofs for cryptographic protocols by introducing a core calculus, IPDL, for cryptographic observational equivalences. Via IPDL, we aim to address a number of theoretical issues for cryptographic proofs in a simple manner, including probabilistic behaviors, distributed message-passing, and resource-bounded adversaries and simulators. We demonstrate IPDL on a number of case studies, including a distributed coin toss protocol, Oblivious Transfer, and the GMW multi-party computation protocol. All proofs of case studies are mechanized via an embedding of IPDL into the Coq proof assistant. 
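To preview the kind of statement IPDL mechanizes, the toy sketch below compares a "real" distributed coin toss (the XOR of the parties' local fair bits) against the "ideal" functionality (a single fair coin) with a Monte Carlo frequency check. This plain-Python comparison is only an illustration of observational equivalence, not an IPDL or Coq proof; all names and sample sizes are made up.

```python
import random

# Toy "real vs. ideal" comparison for a distributed coin toss: the real protocol
# XORs each party's local fair bit; the ideal functionality samples one fair coin.
# A Monte Carlo frequency check stands in for the observational-equivalence proof
# that IPDL carries out symbolically; all names here are illustrative.

def real_protocol(n_parties: int, rng: random.Random) -> int:
    """Each party contributes a fair bit; the output is the XOR of all bits."""
    out = 0
    for _ in range(n_parties):
        out ^= rng.randint(0, 1)
    return out

def ideal_functionality(rng: random.Random) -> int:
    """The ideal coin toss: a single fair bit."""
    return rng.randint(0, 1)

def frequency_of_one(sampler, trials=100_000):
    return sum(sampler() for _ in range(trials)) / trials

if __name__ == "__main__":
    rng = random.Random(1)
    print("real :", frequency_of_one(lambda: real_protocol(3, rng)))
    print("ideal:", frequency_of_one(lambda: ideal_functionality(rng)))
```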
  4. Summary

    Wait-freedom guarantees that all processes complete their operations in a finite number of steps regardless of the delay of any process. Combinatorial topology has been proposed in the literature as a formal verification technique for proving the wait-free computability of decision tasks. Wait-freedom is proved through the properties of a static topological structure that expresses all possible execution paths of the protocol solving the decision task. The practical application of combinatorial topology as a formal verification technique has been limited because the existing theory only considers protocols in which processes communicate through read-write memory. This research proposes an extension of the existing theory, called the CAS-extended model. The extended theory includes Compare-And-Swap (CAS) and Load-Link/Store-Conditional (LL/SC), the atomic primitives used to achieve wait-freedom in state-of-the-art protocols. The CAS-extended model can be used to formally verify wait-free algorithms used in practice, such as concurrent data structures. We present new definitions detailing the construction of a protocol complex in the CAS-extended model. As a proof of concept, we formally verify a wait-free queue with three processes using CAS-extended combinatorial topology.

     
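For readers unfamiliar with the primitive, the sketch below simulates an atomic compare-and-swap cell in Python and uses it in the classic retry-loop increment. The lock only stands in for hardware atomicity, and the retry loop is lock-free rather than wait-free, so it is not the paper's CAS-extended construction; class and function names are illustrative.

```python
import threading

# Illustrative only: a simulated atomic compare-and-swap (CAS) cell and a
# retry-loop increment built on it.  Real hardware provides CAS as a single
# atomic instruction; the lock here merely simulates that atomicity.  The retry
# loop is lock-free, not wait-free, so it does not by itself give the guarantee
# the CAS-extended model is used to verify; all names are illustrative.

class CASCell:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        with self._lock:
            return self._value

    def compare_and_swap(self, expected, new):
        """Atomically set the cell to `new` iff its current value equals `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(cell: CASCell):
    """Classic CAS retry loop: re-read and retry until the swap succeeds."""
    while True:
        current = cell.load()
        if cell.compare_and_swap(current, current + 1):
            return

if __name__ == "__main__":
    cell = CASCell()
    threads = [threading.Thread(target=lambda: [increment(cell) for _ in range(1000)])
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(cell.load())  # expected: 4000
```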
  5. We present a methodology for using the EasyCrypt proof assistant (originally designed for mechanizing proofs of game-based security of cryptographic schemes and protocols) to mechanize proofs of security of cryptographic protocols within the universally composable (UC) security framework. This allows, for the first time, the mechanization and formal verification of the entire sequence of steps needed for proving simulation-based security in a modular way:
     * Specifying a protocol and the desired ideal functionality.
     * Constructing a simulator and demonstrating its validity, via reduction to hard computational problems.
     * Invoking the universal composition operation and demonstrating that it indeed preserves security.
     We demonstrate our methodology on a simple example: stating and proving the security of secure message communication via a one-time pad, where the key comes from a Diffie-Hellman key exchange, assuming ideally authenticated communication. We first put together EasyCrypt-verified proofs that (a) the Diffie-Hellman protocol UC-realizes an ideal key-exchange functionality, assuming hardness of the Decisional Diffie-Hellman problem, and (b) one-time-pad encryption, with a key obtained using ideal key exchange, UC-realizes an ideal secure-communication functionality. We then mechanically combine the two proofs into an EasyCrypt-verified proof that the composed protocol realizes the same ideal secure-communication functionality. Although formulating a methodology that is both sound and workable has proven to be a complex task, we are hopeful that it will prove to be the basis for mechanized UC security analyses of significantly more complex protocols.
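For concreteness, here is a plain-Python sketch, not EasyCrypt code and not a security proof, of the composed protocol the abstract analyzes: a Diffie-Hellman exchange whose shared secret keys a one-time pad, with the channel assumed ideally authenticated. The toy group parameters and the SHA-256-based pad derivation are illustrative stand-ins, not taken from the paper.

```python
import hashlib
import secrets

# A toy end-to-end run (not EasyCrypt code) of Diffie-Hellman key exchange
# followed by one-time-pad encryption.  The group parameters are far too small
# for real use, and the SHA-256 pad derivation is an illustrative choice.

P = (1 << 127) - 1   # toy prime modulus (a Mersenne prime); insecure, for illustration only
G = 3                # toy generator

def dh_keygen():
    """Sample a secret exponent x and the corresponding public value g^x mod p."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def dh_shared(secret, their_public):
    """Both sides compute the same shared value g^{xy} mod p."""
    return pow(their_public, secret, P)

def derive_pad(shared, length):
    """Stretch the shared secret into a pad of `length` bytes (illustrative KDF)."""
    pad = b""
    counter = 0
    while len(pad) < length:
        pad += hashlib.sha256(shared.to_bytes(16, "big") + counter.to_bytes(4, "big")).digest()
        counter += 1
    return pad[:length]

def otp(data, pad):
    """One-time-pad encryption/decryption: XOR with the pad."""
    return bytes(a ^ b for a, b in zip(data, pad))

if __name__ == "__main__":
    # Key exchange (messages assumed to travel over an ideally authenticated channel).
    a_sk, a_pk = dh_keygen()
    b_sk, b_pk = dh_keygen()
    k_a = dh_shared(a_sk, b_pk)
    k_b = dh_shared(b_sk, a_pk)
    assert k_a == k_b

    msg = b"secure message communication"
    ciphertext = otp(msg, derive_pad(k_a, len(msg)))
    assert otp(ciphertext, derive_pad(k_b, len(ciphertext))) == msg
    print("round trip ok")
```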