Award ID contains: 1910659

  1. Locally Decodable Codes (LDCs) are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. LDCs for Hamming errors have been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient to decode from large amounts of error, with small query complexity. Despite exciting progress, we still do not have satisfactory answers in several important parameter regimes. For example, in the case of 3-query LDCs, the gap between existing constructions and lower bounds is superpolynomial in the message length. In this work we study LDCs for insertion and deletion errors, called Insdel LDCs. Their study was initiated by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), who gave a reduction from Hamming LDCs to Insdel LDCs with a small blowup in the code parameters. On the other hand, the only known lower bounds for Insdel LDCs come from those for Hamming LDCs, so no separation between the two notions was previously known. Here we prove new, strong lower bounds for Insdel LDCs. In particular, we show that 2-query linear Insdel LDCs do not exist, and we give an exponential lower bound on the length of all q-query Insdel LDCs with constant q. For q ≥ 3, our bounds are exponential in the existing lower bounds for Hamming LDCs. Furthermore, our exponential lower bounds continue to hold for adaptive decoders, and even in private-key settings where the encoder and decoder share secret randomness. This exhibits a strict separation between Hamming LDCs and Insdel LDCs. Our strong lower bounds also hold for the related notion of Insdel LCCs (except in the private-key setting), via an Insdel analogue of the reduction from Hamming LCCs to Hamming LDCs. Our techniques are based on a delicate design and analysis of hard distributions of insertion and deletion errors, which depart significantly from the typical techniques used in analyzing Hamming LDCs.
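    As a toy illustration (not from the paper) of why insertions and deletions are harsher than Hamming errors for local decoding, the sketch below compares one substitution against one deletion: the substitution perturbs a single position, while the deletion shifts every later position, so a decoder that probes a few fixed indices sees mostly wrong symbols.

    ```python
    # Toy illustration: one substitution vs. one deletion in a codeword.
    codeword = "0123456789" * 3

    substituted = codeword[:7] + "X" + codeword[8:]   # one Hamming error
    deleted = codeword[:7] + codeword[8:]             # one deletion

    # Positionwise mismatches against the original codeword.
    mismatch_sub = sum(a != b for a, b in zip(codeword, substituted))
    mismatch_del = sum(a != b for a, b in zip(codeword, deleted))
    print(mismatch_sub, mismatch_del)  # 1 vs. 22: everything after the deletion shifts
    ```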
  2. Berenbrink, Petra, et al. (Eds.)
    A directed acyclic graph G = (V,E) is said to be (e,d)-depth-robust if for every subset S ⊆ V of |S| ≤ e nodes the graph G-S still contains a directed path of length d. If the graph is (e,d)-depth-robust for any e,d such that e+d ≤ (1-ε)|V|, then the graph is said to be ε-extreme depth-robust. In the field of cryptography, (extremely) depth-robust graphs with low indegree have found numerous applications, including the design of side-channel resistant Memory-Hard Functions, Proofs of Space and Replication, and Computationally Relaxed Locally Correctable Codes. In these applications, it is desirable to ensure the graphs are locally navigable, i.e., there is an efficient algorithm GetParents running in time polylog|V| which takes as input a node v ∈ V and returns the set of v’s parents. We give the first explicit construction of locally navigable ε-extreme depth-robust graphs with indegree O(log |V|). Previous constructions of ε-extreme depth-robust graphs either had indegree ω̃(log² |V|) or were not explicit.
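    To make the definition concrete, here is a hedged brute-force check of (e,d)-depth-robustness for small DAGs (exponential time, purely illustrative; path length is measured in edges, and it suffices to test removal sets of size exactly e, since removing fewer nodes only lengthens the surviving paths).

    ```python
    from itertools import combinations

    def longest_path(n, edges, removed):
        """Longest path (edge count) avoiding `removed`, by DP in topological
        order. Nodes are 0..n-1 and every edge (u, v) has u < v, so sorting
        the edges processes them in a valid topological order."""
        depth = {v: 0 for v in range(n) if v not in removed}
        for u, v in sorted(edges):
            if u in depth and v in depth:
                depth[v] = max(depth[v], depth[u] + 1)
        return max(depth.values(), default=0)

    def is_depth_robust(n, edges, e, d):
        """(e, d)-depth-robust iff every e-node removal leaves a path of length >= d."""
        return all(longest_path(n, edges, set(S)) >= d
                   for S in combinations(range(n), e))

    # The path 0 -> 1 -> ... -> 5 is not (1, 4)-depth-robust: deleting node 3 works.
    print(is_depth_robust(6, [(i, i + 1) for i in range(5)], 1, 4))  # False
    ```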
  3. The Schnorr signature scheme is an efficient digital signature scheme with short signature lengths, i.e., $4k$-bit signatures for $k$ bits of security. A Schnorr signature $\sigma$ over a group of size $p\approx 2^{2k}$ consists of a tuple $(s,e)$, where $e \in \{0,1\}^{2k}$ is a hash output and $s\in \mathbb{Z}_p$ must be computed using the secret key. While the hash output $e$ requires $2k$ bits to encode, Schnorr proposed that it might be possible to truncate the hash value without adversely impacting security. In this paper, we prove that \emph{short} Schnorr signatures of length $3k$ bits provide $k$ bits of multi-user security in Shoup's generic group model and the programmable random oracle model. We further analyze the multi-user security of key-prefixed short Schnorr signatures against preprocessing attacks, showing that it is possible to obtain secure signatures of length $3k + \log S + \log N$ bits. Here, $N$ denotes the number of users and $S$ denotes the size of the hint generated by our preprocessing attacker, e.g., if $S=2^{k/2}$, then we would obtain secure $3.75k$-bit signatures for groups of up to $N \leq 2^{k/4}$ users. Our techniques easily generalize to several other Fiat-Shamir-based signature schemes, allowing us to establish analogous results for Chaum-Pedersen signatures and Katz-Wang signatures. As a building block, we also analyze the $1$-out-of-$N$ discrete-log problem in the generic group model, with and without preprocessing.
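    A minimal toy sketch of the short-Schnorr flow follows (hedged: the group is tiny, the parameters are illustrative stand-ins, and this is not the paper's analysis; it only shows where the truncated hash $e$ enters the signature).

    ```python
    import hashlib
    import secrets

    # Toy Schnorr subgroup: q = 607 is prime, p = 101 divides q - 1 = 606, and g
    # generates the order-p subgroup. Far too small for security; illustration only.
    q, p = 607, 101
    g = pow(2, (q - 1) // p, q)
    K_BITS = 8  # stands in for k: the hash is truncated to ~k bits ("short" Schnorr)

    def H(data: bytes) -> int:
        """Random-oracle stand-in, truncated to K_BITS bits."""
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % (1 << K_BITS)

    def keygen():
        sk = secrets.randbelow(p - 1) + 1
        return sk, pow(g, sk, q)

    def sign(sk: int, msg: bytes):
        r = secrets.randbelow(p - 1) + 1
        R = pow(g, r, q)
        e = H(R.to_bytes(2, "big") + msg)  # truncated hash: only K_BITS long
        s = (r + sk * e) % p
        return s, e                        # ~ (2k) + (k) = 3k bits in the paper

    def verify(pk: int, msg: bytes, sig) -> bool:
        s, e = sig
        R = pow(g, s, q) * pow(pk, -e, q) % q  # g^s * pk^(-e) = g^r = R
        return H(R.to_bytes(2, "big") + msg) == e

    sk, pk = keygen()
    assert verify(pk, b"hello", sign(sk, b"hello"))
    ```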
  4. Mikołaj Bojańczyk, Emanuela Merelli (Eds.)
    We initiate a systematic study of algorithms that are both differentially private and run in sublinear time, for several problems in which the goal is to estimate natural graph parameters. Our main result is a differentially private $(1+\rho)$-approximation algorithm for the problem of computing the average degree of a graph, for every $\rho>0$. The running time of the algorithm is roughly the same (for sparse graphs) as that of its non-private version proposed by Goldreich and Ron (Sublinear Algorithms, 2005). We also obtain the first differentially private sublinear-time approximation algorithms for the maximum matching size and the minimum vertex cover size of a graph. An overarching technique we employ is the notion of \emph{coupled global sensitivity} of randomized algorithms. Related variants of this notion of sensitivity have been used in the literature in ad hoc ways. Here we formalize the notion and develop it as a unifying framework for the privacy analysis of randomized approximation algorithms.
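    A hedged toy of the high-level recipe (not the paper's algorithm): run a sampling-based degree estimator, then add Laplace noise scaled by a sensitivity bound over epsilon. In the paper that bound comes from the coupled-global-sensitivity analysis; here it is simply a supplied parameter.

    ```python
    import random

    def sample_average_degree(adj, samples=1000):
        """Non-private sublinear-style estimator: average degree over sampled vertices."""
        nodes = list(adj)
        return sum(len(adj[random.choice(nodes)]) for _ in range(samples)) / samples

    def laplace_noise(scale):
        # Difference of two i.i.d. exponentials is Laplace-distributed with this scale.
        return random.expovariate(1 / scale) - random.expovariate(1 / scale)

    def private_average_degree(adj, epsilon, sensitivity):
        """Add Laplace(sensitivity / epsilon) noise. `sensitivity` must upper-bound
        how much the estimator's output can change between neighboring graphs --
        the quantity the coupled-global-sensitivity framework is designed to bound."""
        return sample_average_degree(adj) + laplace_noise(sensitivity / epsilon)

    # Tiny example graph (adjacency lists); true average degree is 1.0.
    graph = {0: [1], 1: [0, 2], 2: [1], 3: []}
    print(private_average_degree(graph, epsilon=1.0, sensitivity=1.0))
    ```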
  5. We formally introduce, define, and construct \emph{memory-hard puzzles}. Intuitively, for a difficulty parameter $t$, a cryptographic puzzle is memory-hard if any parallel random access machine (PRAM) algorithm with ``small'' cumulative memory complexity ($\ll t^2$) cannot solve the puzzle; moreover, such puzzles should be both ``easy'' to generate and solvable by a sequential RAM algorithm running in time $t$. Our definitions and constructions of memory-hard puzzles are in the standard model, assuming the existence of indistinguishability obfuscation (iO) and one-way functions (OWFs), and additionally assuming the existence of a \emph{memory-hard language}. Intuitively, a language is memory-hard if it is undecidable by any PRAM algorithm with ``small'' cumulative memory complexity, while a sequential RAM algorithm running in time $t$ can decide the language. Our definitions and constructions of memory-hard objects are the first such definitions and constructions in the standard model without relying on idealized assumptions (such as random oracles). We give two applications which highlight the utility of memory-hard puzzles. For our first application, we give a construction of a (one-time) \emph{memory-hard function} (MHF) in the standard model, using memory-hard puzzles and additionally assuming iO and OWFs. For our second application, we show that any cryptographic puzzle (e.g., memory-hard, time-lock) can be used to construct \emph{resource-bounded locally decodable codes} (LDCs) in the standard model, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020). Resource-bounded LDCs achieve better rate and locality than their classical counterparts under the assumption that the adversarial channel is resource bounded (e.g., a low-depth circuit). Prior constructions of MHFs and resource-bounded LDCs required idealized primitives like random oracles.
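    For flavor, here is a hedged toy instantiation of the puzzle syntax (Gen, Solve) using iterated hashing. This toy is at best sequentially hard, not memory-hard, and its generator is as slow as its solver; the paper's construction achieves fast generation and genuine memory-hardness via iO and a memory-hard language.

    ```python
    import hashlib
    import secrets

    def gen(t: int, solution: bytes):
        """Mask the solution with the t-th SHA-256 iterate of a fresh seed.
        (Toy only: a real memory-hard puzzle's generator must be fast.)"""
        seed = secrets.token_bytes(16)
        h = seed
        for _ in range(t):
            h = hashlib.sha256(h).digest()
        ct = bytes(a ^ b for a, b in zip(solution.ljust(32, b"\0"), h))
        return seed, t, ct

    def solve(puzzle) -> bytes:
        """Recover the solution by redoing the t sequential hash iterations."""
        seed, t, ct = puzzle
        h = seed
        for _ in range(t):
            h = hashlib.sha256(h).digest()
        return bytes(a ^ b for a, b in zip(ct, h)).rstrip(b"\0")

    assert solve(gen(1000, b"secret")) == b"secret"
    ```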
  6. We construct locally decodable codes (LDCs) to correct insertion-deletion errors in the setting where the sender and receiver share a secret key or where the channel is resource-bounded. Our constructions rely on a so-called ``Hamming-to-InsDel'' compiler (Ostrovsky and Paskin-Cherniavsky, ITS '15, and Block et al., FSTTCS '20), which compiles any locally decodable Hamming code into a locally decodable code resilient to insertion-deletion (InsDel) errors. While these compilers were designed for the classical coding setting, we show that they still work in a secret-key or resource-bounded setting. Applying our results to the private-key Hamming LDC of Ostrovsky, Pandey, and Sahai (ICALP '07), we obtain a private-key InsDel LDC with constant rate and polylogarithmic locality. Applying our results to the construction of Blocki, Kulkarni, and Zhou (ITC '20), we obtain similar results for resource-bounded channels, i.e., channels where computation is constrained by resources such as space or time.
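    A hedged sketch of the outer layout idea behind such compilers (block sizes, buffer lengths, and the realignment rule below are illustrative stand-ins, not the actual construction, which additionally encodes each block with an inner code that avoids long zero-runs): the Hamming codeword is split into blocks separated by long runs of a buffer symbol, so a local decoder can re-synchronize near its expected offset even after insertions and deletions.

    ```python
    def compile_to_insdel(hamming_codeword: str, block: int = 4, buf: int = 6) -> str:
        """Lay blocks of the Hamming codeword between runs of '0' (buffers); each
        block is prefixed with a '1' marker so a buffer run never merges with data."""
        blocks = [hamming_codeword[i:i + block]
                  for i in range(0, len(hamming_codeword), block)]
        return "0" * buf + ("0" * buf).join("1" + b for b in blocks) + "0" * buf

    def locate_block(word: str, expected: int, slack: int, buf: int = 6) -> int:
        """Toy local realignment: search a +/- slack window around the expected
        offset for 'buffer run + marker', returning the block's start (or -1)."""
        pattern = "0" * buf + "1"
        i = word.find(pattern, max(0, expected - slack), expected + slack + len(pattern))
        return i + len(pattern) if i != -1 else -1
    ```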