 Editors:
 Qiu, Meikang
 Award ID(s):
 1801512
 Publication Date:
 NSFPAR ID:
 10281378
 Journal Name:
 International Conference on Algorithms and Architectures for Parallel Processing, ICA3PP 2020
 Volume:
 LNCS 12452
 Page Range or eLocationID:
661-680
 Sponsoring Org:
 National Science Foundation
More Like this

Lattice-based algorithms in cryptanalysis often search for a target vector satisfying integer linear constraints as a shortest or closest vector in some lattice. In this work, we observe that these formulations may discard nonlinear information from the underlying application that can be used to distinguish the target vector even when it is far from being uniquely close or short. We formalize lattice problems augmented with a predicate distinguishing a target vector and give algorithms for solving instances of these problems. We apply our techniques to lattice-based approaches for solving the Hidden Number Problem, a popular technique for recovering secret DSA or ECDSA keys in side-channel attacks, and demonstrate that our algorithms succeed in recovering the signing key for instances that were previously believed to be unsolvable using lattice approaches. We carried out extensive experiments using our estimation and solving framework, which we also make available with this work.
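The distinguishing idea above can be illustrated with a toy sketch (all parameters hypothetical, not from the paper): a lattice step typically yields many candidate secrets, and a nonlinear predicate, here agreement with a known public key, singles out the true one even when it is not the unique shortest or closest vector.

```python
# Toy illustration (hypothetical parameters, not from the paper): among many
# lattice candidates, a nonlinear predicate -- here, matching a known public
# key -- identifies the target even without uniqueness in the lattice sense.
p, g = 2879, 2            # small toy group (illustrative only)
secret = 1337
public = pow(g, secret, p)

def predicate(x):
    """Nonlinear check unavailable to a pure SVP/CVP formulation."""
    return pow(g, x, p) == public

# Pretend an enumeration of short lattice vectors yielded these candidates:
candidates = [1290, 1337, 1402, 2051]
recovered = [x for x in candidates if predicate(x)]
```

In a real attack the candidate list would come from lattice enumeration or sieving, and the predicate from the signature scheme's public data.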

Graph processing has recently received intensive interest in light of a wide range of needs to understand relationships. It is well known for poor locality and a high memory bandwidth requirement. On conventional architectures, graph workloads incur a significant amount of data movement and energy consumption, which has motivated several hardware graph processing accelerators. Current graph processing accelerators rely on memory access optimizations or on placing computation logic close to memory. Distinct from all existing approaches, we leverage an emerging memory technology to accelerate graph processing with analog computation. This paper presents GRAPHR, the first ReRAM-based graph processing accelerator. GRAPHR follows the principle of near-data processing and explores the opportunity of performing massively parallel analog operations with low hardware and energy cost. Analog computation is suitable for graph processing because: 1) the algorithms are iterative and can inherently tolerate imprecision; 2) both probability calculations (e.g., PageRank and Collaborative Filtering) and typical graph algorithms involving integers (e.g., BFS/SSSP) are resilient to errors. The key insight of GRAPHR is that if a vertex program of a graph algorithm can be expressed as a sparse matrix-vector multiplication (SpMV), it can be performed efficiently by a ReRAM crossbar. We show that this assumption is generally true for […]
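As a hedged sketch of the key insight, the following expresses one PageRank iteration as a sparse matrix-vector product, the computational form that maps onto ReRAM crossbars; the graph and damping factor are illustrative, not taken from the paper.

```python
# Hedged sketch: a vertex program (one PageRank iteration) written as SpMV,
# y = A x, over only the stored nonzeros -- the operation a ReRAM crossbar
# performs in analog. Graph and damping factor are illustrative.
edges = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]   # directed edges u -> v
n, d = 4, 0.85
out_deg = [0] * n
for u, _ in edges:
    out_deg[u] += 1

# Sparse column-normalized adjacency: A[v][u] = 1/out_deg(u) for edge u -> v.
A = {}
for u, v in edges:
    A.setdefault(v, {})[u] = 1.0 / out_deg[u]

def spmv(A, x):
    """y = A @ x using only stored nonzeros (what the crossbar computes)."""
    return [sum(w * x[u] for u, w in A.get(v, {}).items()) for v in range(n)]

rank = [1.0 / n] * n
for _ in range(50):                      # iterate rank = (1-d)/n + d * A rank
    rank = [(1 - d) / n + d * y for y in spmv(A, rank)]
```

The same SpMV skeleton covers BFS/SSSP-style algorithms by swapping the (+, *) semiring for (min, +), which is why expressibility as SpMV is the criterion that matters.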

Abstract Quantum annealing is a promising approach to heuristically solving difficult combinatorial optimization problems. However, the connectivity limitations in current devices lead to an exponential degradation of performance on general problems. We propose an architecture for a quantum annealer that achieves full connectivity and full programmability while using a number of physical resources only linear in the number of spins. We do so by application of carefully engineered periodic modulations of oscillator-based qubits, resulting in a Floquet Hamiltonian in which all the interactions are tunable. This flexibility comes at the cost of the coupling strengths between qubits being smaller than they would be compared with direct coupling, which increases the demand on coherence times with increasing problem size. We analyze a specific hardware proposal of our architecture based on Josephson parametric oscillators. Our results show how the minimum-coherence-time requirements imposed by our scheme scale, and we find that the requirements are not prohibitive for fully connected problems with up to at least 1000 spins. Our approach could also have impact beyond quantum annealing, since it readily extends to bosonic quantum simulators, and would allow the study of models with arbitrary connectivity between lattice sites.
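For context, the combinatorial problems a quantum annealer targets are typically phrased as Ising energy minimization over spins s_i in {-1, +1}. The toy instance below (couplings and fields chosen arbitrarily) shows the fully connected objective and the exponential classical baseline of brute-force search that motivates annealing hardware in the first place.

```python
# Illustrative toy instance (not from the paper): the annealer's optimization
# target is the Ising energy
#   E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i,   s_i in {-1, +1},
# with all-to-all couplings J_ij -- exactly what full connectivity enables.
import itertools

J = {(0, 1): 1.0, (0, 2): -0.5, (1, 2): 0.8}   # fully connected couplings
h = [0.1, -0.2, 0.0]                           # local fields

def energy(s):
    pair = sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    field = sum(hi * si for hi, si in zip(h, s))
    return pair + field

# Classical brute force over all 2^n spin configurations -- the baseline
# that becomes infeasible at the problem sizes annealers aim for.
ground = min(itertools.product((-1, 1), repeat=3), key=energy)
```

On devices with limited connectivity, embedding dense J matrices costs extra physical qubits, which is the overhead the proposed architecture removes.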

We consider the high-dimensional linear regression problem, where the algorithmic goal is to efficiently infer an unknown feature vector $\beta^*\in\mathbb{R}^p$ from its linear measurements, using a small number $n$ of samples. Unlike most of the literature, we make no sparsity assumption on $\beta^*$, but instead adopt a different regularization: in the noiseless setting, we assume $\beta^*$ consists of entries that are either rational numbers with a common denominator $Q\in\mathbb{Z}^+$ (referred to as $Q$-rationality), or irrational numbers taking values in a rationally independent set of bounded cardinality known to the learner; collectively called the mixed-range assumption. Using a novel combination of the PSLQ integer relation detection algorithm and the Lenstra-Lenstra-Lovász (LLL) lattice basis reduction algorithm, we propose a polynomial-time algorithm which provably recovers a $\beta^*\in\mathbb{R}^p$ satisfying the mixed-range assumption from its linear measurements $Y=X\beta^*\in\mathbb{R}^n$, for a large class of distributions for the random entries of $X$, even with a single measurement ($n=1$). In the noisy setting, we propose a polynomial-time, lattice-based algorithm which recovers a $\beta^*\in\mathbb{R}^p$ satisfying the $Q$-rationality property from its noisy measurements $Y=X\beta^*+W\in\mathbb{R}^n$, even from a single sample ($n=1$). We further establish that for large $Q$ and normal noise, this algorithm tolerates an information-theoretically optimal level of […]
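A toy sketch of the $Q$-rationality setting with a single measurement ($n=1$): the paper's algorithm relies on LLL/PSLQ, but for a tiny instance exhaustive search over small numerators can stand in for lattice reduction. All parameters below are illustrative, and the measurement row is chosen so that the toy instance has a unique solution (the paper handles generic random $X$).

```python
# Toy sketch (illustrative parameters): recovering a Q-rational beta from a
# single noiseless measurement y = <x, beta>. Exhaustive search over small
# numerators stands in for the paper's LLL/PSLQ-based lattice reduction.
from fractions import Fraction
from itertools import product

Q, p = 7, 3                                  # common denominator, dimension
beta = [Fraction(3, Q), Fraction(-2, Q), Fraction(5, Q)]   # hidden vector
# Measurement row chosen (spread like base-20 digits) so the solution with
# numerators in [-B, B] is unique -- a stand-in for genericity of random X.
x = [Fraction(1), Fraction(20), Fraction(400)]
y = sum(xi * bi for xi, bi in zip(x, beta))  # the single measurement (n = 1)

B = 6                                        # bound on candidate numerators
for nums in product(range(-B, B + 1), repeat=p):
    cand = [Fraction(k, Q) for k in nums]
    if sum(xi * ci for xi, ci in zip(x, cand)) == y:
        recovered = cand
        break
```

The point of the example is identifiability: the structural assumption on the entries, not sparsity, is what makes a single measurement suffice.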

Zero-knowledge succinct arguments of knowledge (zkSNARKs) enable efficient privacy-preserving proofs of membership for general NP languages. Our focus in this work is on post-quantum zkSNARKs, with the goal of minimizing proof size. Currently, there is a 1000x gap in proof size between the best pre-quantum constructions and the best post-quantum ones. Here, we develop and implement new lattice-based zkSNARKs in the designated-verifier preprocessing model. With our construction, after an initial preprocessing step, a proof for an NP relation of size 2^20 is just over 16 KB. Our proofs are 10.3x shorter than previous post-quantum zkSNARKs for general NP languages. Compared to previous lattice-based zkSNARKs (also in the designated-verifier preprocessing model), we obtain a 42x reduction in proof size and a 60x reduction in the prover's running time, all while achieving a much higher level of soundness. Compared to the shortest pre-quantum zkSNARKs by Groth (Eurocrypt 2016), the proof size in our lattice-based construction is 131x longer, but both the prover and the verifier are faster (by 1.2x and 2.8x, respectively). Our construction follows the general blueprint of Bitansky et al. (TCC 2013) and Boneh et al. (Eurocrypt 2017) of combining a linear probabilistically checkable proof (linear PCP) together with […]
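As background on the linear-PCP ingredient of that blueprint, the following hedged toy shows a Hadamard-style building block: the proof is a vector pi = (w, w tensor w), the verifier makes only linear queries to it, and a multiplicative consistency check relates the answers. The field size, witness, and query derivation here are illustrative, not the paper's construction.

```python
# Hedged toy of a linear-PCP building block (Hadamard-style). The proof is
# pi = (w, w tensor w); the verifier only ever learns inner products <q, pi>,
# and checks a multiplicative relation on the answers. Parameters illustrative.
import random

p = 101                                  # small prime field (toy choice)
w = [3, 7, 2]                            # prover's witness
tensor = [wi * wj % p for wi in w for wj in w]
pi = w + tensor                          # linear proof vector, length 3 + 9

def answer(q):
    """Linear oracle access to pi: the only operation the verifier gets."""
    return sum(qi * pii for qi, pii in zip(q, pi)) % p

random.seed(0)
r = [random.randrange(p) for _ in range(3)]
s = [random.randrange(p) for _ in range(3)]
a1 = answer(r + [0] * 9)                             # <r, w>
a2 = answer(s + [0] * 9)                             # <s, w>
a3 = answer([0] * 3 + [ri * sj % p for ri in r for sj in s])  # <r⊗s, w⊗w>
# For an honest proof, <r,w> * <s,w> = <r⊗s, w⊗w> (mod p).
consistent = (a1 * a2 - a3) % p == 0
```

In the designated-verifier constructions referenced above, such linear queries are hidden inside a linear-only encryption scheme so the prover can answer them without learning them.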