In a traitor tracing (TT) system for n users, every user has his/her own secret key. Content providers can encrypt messages using a public key, and each user can decrypt the ciphertext using his/her secret key. Suppose some of the n users collude to construct a pirate decoding box. Then the tracing scheme has a special algorithm, called Trace, which can identify at least one of the secret keys used to construct the pirate decoding box. Traditionally, the trace algorithm output only the "index" associated with the traitors. As a result, to use such systems, either a central master authority must map the indices to actual identities, or there must be a public mapping of indices to identities. Both these options are problematic, especially if we need public tracing with anonymity of users. Nishimaki, Wichs, and Zhandry (NWZ) [Eurocrypt 2016] addressed this problem by constructing a traitor tracing scheme where the identities of users are embedded in the secret keys, and the trace algorithm, given a decoding box D, can recover the entire identities of the traitors. We call such schemes "Embedded Identity Traitor Tracing" schemes. NWZ constructed such schemes based on adaptively secure functional encryption (FE). Currently, the only known constructions of such FE schemes are based on nonstandard assumptions such as multilinear maps and iO. In this work, we study the problem of embedded-identity TT based on standard assumptions. We provide a range of constructions based on different assumptions such as public-key encryption (PKE), bilinear maps, and the Learning with Errors (LWE) assumption. The different constructions have different efficiency trade-offs. In our PKE-based construction, the ciphertext size grows linearly with the number of users; the bilinear-maps-based construction has sub-linear (√n-sized) ciphertexts. Both these schemes have public tracing. The LWE-based scheme is a private tracing scheme with optimal ciphertexts (i.e., O(log n)). Finally, we also present other notions of traitor tracing, and discuss how they can be built in a generic manner from our base embedded identity TT scheme.
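To make the embedded-identity syntax concrete, here is a minimal and deliberately insecure Python sketch — our own illustration under toy assumptions, not the paper's construction. It replaces PKE with reusable XOR pads (so encryption here needs the master key, whereas the real schemes are public-key), and tracing assumes the decoder was built from a single traitor's key; the real schemes handle arbitrary collusions.

# Toy sketch of embedded-identity traitor tracing (insecure; illustration only).
# Identities are KAPPA-bit strings; each (bit-position, bit-value) pair is a
# "slot" with its own pad, and a user's key embeds their identity by holding
# exactly the slots that match their identity bits.
import secrets

KAPPA = 8      # identity length in bits
MSG_LEN = 16   # message length in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def setup(kappa=KAPPA):
    # One pad per slot: 2*kappa slots in total.
    return {(j, b): secrets.token_bytes(MSG_LEN)
            for j in range(kappa) for b in (0, 1)}

def keygen(msk, identity):
    # The user's key literally embeds the identity: slots matching its bits.
    return {(j, identity[j]): msk[(j, identity[j])]
            for j in range(len(identity))}

def encrypt(msk, msg, garbled=frozenset()):
    # Secret-share msg across the kappa positions; slot (j, b) hides share j.
    # Garbled slots get random junk (used only by the tracer in this toy).
    kappa = max(j for j, _ in msk) + 1
    shares = [secrets.token_bytes(MSG_LEN) for _ in range(kappa - 1)]
    last = msg
    for s in shares:
        last = xor(last, s)
    shares.append(last)  # XOR of all kappa shares equals msg
    return {(j, b): (secrets.token_bytes(MSG_LEN) if (j, b) in garbled
                     else xor(msk[(j, b)], shares[j]))
            for j in range(kappa) for b in (0, 1)}

def decrypt(user_key, ct):
    # Recover share j from the user's slot at position j; XOR them all.
    out = bytes(MSG_LEN)
    for slot, pad in user_key.items():
        out = xor(out, xor(pad, ct[slot]))
    return out

def trace(msk, decoder, kappa=KAPPA):
    # Recover the embedded identity bit by bit: garble slot (j, 0); the box
    # still decrypts correctly iff the traitor's j-th identity bit is 1.
    probe = bytes(MSG_LEN)
    return [1 if decoder(encrypt(msk, probe, garbled={(j, 0)})) == probe else 0
            for j in range(kappa)]

# Usage: a pirate box built from one user's key leaks that user's identity.
msk = setup()
identity = [1, 0, 1, 1, 0, 0, 1, 0]
box = lambda ct, uk=keygen(msk, identity): decrypt(uk, ct)
assert trace(msk, box) == identity

Note that this toy's tracing ciphertexts are trivially distinguishable from honest ones and collapse under collusion; making such probes indistinguishable and collusion-resistant is exactly what the constructions in the paper achieve.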
Just How Hard Are Rotations of Z^n? Algorithms and Cryptography with the Simplest Lattice
We study the computational problem of finding a shortest non-zero vector in a rotation of ℤⁿ, which we call ℤSVP. It has been a long-standing open problem to determine whether a polynomial-time algorithm for ℤSVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve ℤSVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in 2^{n+o(n)} time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for ℤSVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if ℤSVP actually is hard, then what consequences would follow? Our results are as follows. (1) We show that ℤSVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce ℤSVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of ℤSVP from SVP. As a consequence of this reduction, we obtain a 2^{n/2+o(n)}-time algorithm for ℤSVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices: semi-stable lattices with not-too-large λ₁.) (2) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) ℤSVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of ℤⁿ from either a lattice with all non-zero vectors longer than √(n/log n) or a lattice with smoothing parameter significantly smaller than the smoothing parameter of ℤⁿ. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that "ℤⁿ has the largest smoothing parameter." (3) We show a distribution of bases D for rotations of ℤⁿ such that, if ℤSVP is hard for any input basis, then ℤSVP is hard on input D. This gives a satisfying theoretical resolution to the problem of sampling hard bases for ℤⁿ, which was studied by Blanks and Miller [9]. This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices [15], and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in [5, 11, 20] in different contexts.) (4) We perform experiments to determine how practical basis reduction performs on bases of ℤⁿ that are generated in different ways and how heuristic sieving algorithms perform on ℤⁿ. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the "provably hard" distribution of bases described above. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on ℤⁿ.
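For intuition about the objects involved, the sketch below samples a basis B = Q·U of a rotation Q·ℤⁿ of ℤⁿ, where Q is a Haar-random rotation and U is a random unimodular matrix that scrambles the obvious orthonormal basis. This is our own toy sampler for illustration only — it is not the provably hard distribution D from the paper, nor the distributions studied by Blanks and Miller.

# Toy sampler for bases of rotations of Z^n (illustration under our own
# assumptions; NOT the paper's provably hard distribution D).
import numpy as np

def haar_rotation(n, rng):
    # QR of a Gaussian matrix, with column signs corrected, is Haar-distributed
    # over the orthogonal group; flip one column if needed so det(Q) = +1.
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    q *= np.sign(np.diag(r))  # column-wise sign correction
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

def random_unimodular(n, rng, steps=64):
    # Product of elementary integer row operations; the determinant stays +1,
    # so U is a change of basis of Z^n.
    u = np.eye(n, dtype=np.int64)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        u[i] += int(rng.integers(-1, 2)) * u[j]
    return u

rng = np.random.default_rng(1)
n = 16
B = haar_rotation(n, rng) @ random_unimodular(n, rng)
# B generates a rotation of Z^n, so its Gram matrix B^T B = U^T U is integral
# even though B itself looks like a random real matrix.
assert np.allclose(B.T @ B, np.rint(B.T @ B), atol=1e-6)

The integral Gram matrix is the tell-tale structure that ℤSVP algorithms can try to exploit; the paper's hard distribution is designed so that solving ℤSVP on its output is provably as hard as solving it on a worst-case basis.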
- Award ID(s):
- 2122230
- PAR ID:
- 10428127
- Editor(s):
- Hazay, Carmit; Stam, Martijn
- Date Published:
- Journal Name:
- Lecture notes in computer science
- Volume:
- 14004
- ISSN:
- 0302-9743
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Qiu, Meikang (Ed.) Lattice sieving is currently the leading class of algorithms for solving the shortest vector problem over lattices. The computational difficulty of this problem is the basis for constructing secure post-quantum public-key cryptosystems based on lattices. In this paper, we present a novel massively parallel approach for solving the shortest vector problem using lattice sieving and hardware acceleration. We combine previously reported algorithms with a proper caching strategy and develop a hardware architecture. The main advantage of the proposed approach is eliminating the overhead of data transfer between a CPU and a hardware accelerator. The authors believe that this is the first such architecture reported in the literature to date and predict up to 8 times higher throughput compared to a multi-core high-performance CPU. The presented methods can be adapted to other sieving algorithms that are otherwise hard to implement in FPGAs due to communication and memory bottlenecks.
- Servedio, Rocco (Ed.) We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time, revisiting four foundational results: two worst-case to average-case reductions and two protocols. We also show a novel protocol. 1. We prove that secret-key cryptography exists if Õ(√n)-approximate SVP is hard for 2^{εn}-time algorithms. I.e., we extend to our setting (Micciancio and Regev's improved version of) Ajtai's celebrated polynomial-time worst-case to average-case reduction from Õ(n)-approximate SVP to SIS. 2. We prove that public-key cryptography exists if Õ(n)-approximate SVP is hard for 2^{εn}-time algorithms. This extends to our setting Regev's celebrated polynomial-time worst-case to average-case reduction from Õ(n^{1.5})-approximate SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical, generalizing Peikert's polynomial-time classical reduction from Õ(n²)-approximate SVP. 3. We show a 2^{εn}-time coAM protocol for O(1)-approximate CVP, generalizing the celebrated polynomial-time protocol for O(√(n/log n))-CVP due to Goldreich and Goldwasser. These results show complexity-theoretic barriers to extending the recent line of fine-grained hardness results for CVP and SVP to larger approximation factors. (This result also extends to arbitrary norms.) 4. We show a 2^{εn}-time co-non-deterministic protocol for O(√(log n))-approximate SVP, generalizing the (also celebrated!) polynomial-time protocol for O(√n)-CVP due to Aharonov and Regev. 5. We give a novel coMA protocol for O(1)-approximate CVP with a 2^{εn}-time verifier. All of the results described above are special cases of more general theorems that achieve time-approximation factor tradeoffs.
- Given a set P of n weighted points and a set S of m disks in the plane, the hitting set problem is to compute a subset P′ of points of P such that each disk contains at least one point of P′ and the total weight of all points of P′ is minimized. The problem is known to be NP-hard. In this paper, we consider a line-constrained version of the problem in which all disks are centered on a line ℓ. We present an O((m+n) log(m+n) + κ log m)-time algorithm for the problem, where κ is the number of pairs of disks that intersect. For the unit-disk case, where all disks have the same radius, the running time can be reduced to O((m+n) log(m+n)). In addition, we solve the problem in O((m+n) log(m+n)) time in the L∞ and L₁ metrics, in which a disk is a square and a diamond, respectively.
- The authors present an algorithm that, given an n-vertex m-edge Eulerian graph with polynomially bounded weights, computes an Õ(n log² n · ε⁻²)-edge ε-approximate Eulerian sparsifier with high probability in Õ(m log³ n) time (where Õ(·) hides polyloglog(n) factors). By a reduction from Peng-Song (STOC '22), this yields an Õ(m log³ n + n log⁶ n)-time algorithm for solving n-vertex m-edge Eulerian Laplacian systems with polynomially bounded weights with high probability, improving on the previous state-of-the-art runtime of Ω(m log⁸ n + n log²³ n). They also provide a polynomial-time algorithm that computes sparsifiers with O(min(n log n · ε⁻² + n log^{5/3} n · ε^{-4/3}, n log^{3/2} n · ε⁻²)) edges, improving the previous best bounds. Furthermore, they extend their techniques to yield the first O(m · polylog(n))-time algorithm for computing O(n ε⁻¹ · polylog(n))-edge graphical spectral sketches, along with a natural Eulerian generalization. Unlike prior approaches using short-cycle or expander decompositions, their algorithms leverage a new effective-resistance decomposition scheme, combined with natural sampling and electrical routing for degree balance. The analysis applies asymmetric variance bounds specialized to Eulerian Laplacians and tools from discrepancy theory.
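For intuition about the sampling component of the entry above, here is a simplified, symmetric-case sketch of effective-resistance sampling — the classical Spielman-Srivastava scheme that the Eulerian algorithms refine. It is our own toy with a dense pseudoinverse and a heuristic oversampling constant, not the paper's nearly linear-time, directed algorithm.

# Toy spectral sparsifier via effective-resistance sampling (symmetric case;
# simplified illustration, not the Eulerian algorithm described above).
import numpy as np

def laplacian(n, edges, w):
    L = np.zeros((n, n))
    for (u, v), wt in zip(edges, w):
        L[u, u] += wt; L[v, v] += wt
        L[u, v] -= wt; L[v, u] -= wt
    return L

def sparsify(n, edges, w, eps, rng):
    w = np.asarray(w, dtype=float)
    # Dense pseudoinverse for clarity; fast solvers avoid this entirely.
    Lp = np.linalg.pinv(laplacian(n, edges, w))
    # Effective resistance of (u, v) is (e_u - e_v)^T L^+ (e_u - e_v).
    reff = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    p = w * reff          # leverage scores; they sum to about n - 1
    p /= p.sum()
    q = int(8 * n * np.log(n) / eps**2)   # oversampling rate (heuristic)
    counts = rng.multinomial(q, p)
    keep = counts > 0
    new_w = counts[keep] * w[keep] / (q * p[keep])  # unbiased reweighting
    return [e for e, k in zip(edges, keep) if k], new_w

# Usage on a small weighted cycle plus chords:
rng = np.random.default_rng(0)
edges = [(i, (i + 1) % 8) for i in range(8)] + [(0, 4), (2, 6)]
kept, wts = sparsify(8, edges, np.ones(len(edges)), eps=0.5, rng=rng)

The directed Eulerian setting is harder because the sampled graph must stay Eulerian (degree-balanced), which is where the paper's effective-resistance decomposition and electrical routing come in.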