

This content will become publicly available on January 1, 2026

Title: Stationary Syndrome Decoding for Improved PCGs
Syndrome decoding (SD), and equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography, which states that for a field F, a compressing public matrix G ∈ F^(k×n), and a secret sparse vector e ∈ F^n sampled from some noise distribution, Ge is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs). In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider q correlated noise vectors e_1, ..., e_q ∈ F^n and associated instances G_1 e_1, ..., G_q e_q, where the noise vectors are restricted to having non-zeros in the same small subset of t positions L ⊂ [n]. That is, for all i ∈ L, e_{j,i} is uniformly random, while for all other i, e_{j,i} = 0. Although naively reusing the noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Øygarden, Eurocrypt 2023) to SSD and demonstrate, both theoretically and experimentally, resistance to the attack. We apply SSD to PCGs to amortize the cost of the noise generation protocol. For OT and VOLE generation, each instance requires O(t) communication instead of O(t log n). For suggested parameters, we observe a 1.5× improvement in running time or between a 6× and 18× reduction in communication. For Beaver triple generation using Ring LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of the Ring LPN noise generation.
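To make the assumption concrete, here is a minimal sketch of sampling SSD instances directly from the definition above: one support L of size t is drawn once, and every noise vector is uniform on L and zero elsewhere. The function name, parameter names, and the choice of F = F_p are illustrative assumptions, not from the paper.

```python
# Minimal SSD instance sampler over F_p, following the definition in the
# abstract. All names here are illustrative, not the paper's code.
import numpy as np

def sample_ssd_instances(q, n, k, t, p=2, seed=None):
    """Sample q instances (G_j, s_j = G_j e_j mod p) whose noise vectors
    e_1..e_q share one random support L of size t."""
    rng = np.random.default_rng(seed)
    # The "stationary" part: L is sampled once and reused by every e_j.
    L = rng.choice(n, size=t, replace=False)
    instances = []
    for _ in range(q):
        G = rng.integers(0, p, size=(k, n))   # fresh compressing public matrix
        e = np.zeros(n, dtype=np.int64)
        e[L] = rng.integers(0, p, size=t)     # uniform over F on L, zero elsewhere
        s = (G @ e) % p                       # the instance G_j e_j
        instances.append((G, s))
    return L, instances

L, inst = sample_ssd_instances(q=4, n=64, k=32, t=8, p=2)
```

Note that reusing a single noise vector e across instances would fall to Gaussian elimination, as the abstract points out; here only the support is shared while the nonzero values are fresh per instance.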
Award ID(s):
2217070
PAR ID:
10635320
Publisher / Repository:
Springer Nature Switzerland
Page Range / eLocation ID:
284 to 317
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper proposes a syndrome sphere decoding (SSD) algorithm. SSD achieves the frame error rate (FER) of maximum likelihood (ML) decoding for a tail-biting convolutional code (TBCC) concatenated with an expurgating linear function (ELF) with significantly lower average and maximum decoding complexity than serial list Viterbi decoding (SLVD). SSD begins by using a Viterbi decoder to find the closest trellis path, which may not be tail-biting and may not satisfy the ELF. This trellis path has a syndrome comprised of the difference between the received ELF and the ELF computed from the received message and the difference between the beginning and ending state of the trellis path. This syndrome is used to find all valid tail-biting codewords that satisfy the ELF constraint and lie within a specified distance from the closest trellis path. The proposed algorithm avoids the complexity of SLVD at the cost of a large table containing all the offsets needed for each syndrome. A hybrid decoder combines SSD and SLVD with a fixed maximum list size, balancing the maximum list size for SLVD against the size of the SSD offset table. 
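As a rough illustration of the table-lookup step described above, the sketch below assumes the Viterbi search and the offset-table construction already happened elsewhere; the names, types, and the hard-decision Hamming metric are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the syndrome-table lookup at the heart of the
# syndrome sphere decoder described above.
import numpy as np

def ssd_lookup(path_bits, hard_rx, syndrome, offset_table):
    """path_bits: closest trellis path from Viterbi (0/1 array); it may
    not be tail-biting and may not satisfy the ELF.
    hard_rx: hard-decision received bits (0/1 array).
    syndrome: hashable key pairing the ELF difference with the
    start/end trellis-state difference, as in the abstract.
    offset_table: syndrome -> list of bit-flip masks that turn the path
    into valid tail-biting, ELF-satisfying codewords in the sphere."""
    best, best_dist = None, np.inf
    for offset in offset_table.get(syndrome, []):
        candidate = path_bits ^ offset                 # apply tabulated offset
        dist = np.count_nonzero(candidate ^ hard_rx)   # distance to received word
        if dist < best_dist:
            best, best_dist = candidate, dist
    return best  # None -> nothing in the sphere; fall back (e.g., to SLVD)
```

The hybrid decoder in the abstract trades the size of this offset table against the maximum list size it must allow SLVD in the fallback path.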
  2. As a complement to data deduplication, delta compression further reduces the data volume by compressing non-duplicate data chunks relative to their similar chunks (base chunks). However, existing post-deduplication delta compression approaches for backup storage either suffer from the low similarity between many detected chunks, or miss some potential similar chunks, or suffer from low (backup and restore) throughput due to extra I/Os for reading base chunks or add additional service-disruptive operations to backup systems. In this paper, we propose LoopDelta to address the above-mentioned problems by an enhanced embedding delta compression scheme in deduplication in a non-intrusive way. The enhanced delta compression scheme combines four key techniques: (1) dual-locality-based similarity tracking to detect potential similar chunks by exploiting both logical and physical locality, (2) locality-aware prefetching to prefetch base chunks to avoid extra I/Os for reading base chunks on the write path, (3) cache-aware filter to avoid extra I/Os for base chunks on the read path, and (4) inversed delta compression to perform delta compression for data chunks that are otherwise forbidden to serve as base chunks by rewriting techniques designed to improve restore performance. Experimental results indicate that LoopDelta increases the compression ratio by 1.24–10.97 times on top of deduplication, without notably affecting the backup throughput, and it improves the restore performance by 1.2–3.57 times.
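For readers unfamiliar with the pipeline this abstract builds on, here is a minimal sketch of generic post-deduplication delta compression: exact duplicates are removed by strong hashing, and each remaining chunk is delta-encoded against a similar stored base chunk found through a crude similarity feature. It illustrates the generic scheme only, not LoopDelta's four techniques; the names, the rolling-hash feature, and the XOR-based delta (which assumes equal-sized chunks) are all simplifying assumptions.

```python
# Toy post-dedup delta compression pipeline (illustrative only).
import hashlib, zlib

dedup_index = {}    # strong hash -> stored chunk bytes
feature_index = {}  # similarity feature -> hash of a candidate base chunk

def feature(chunk, k=8):
    # Crude super-feature: max of rolling k-gram hashes; real systems use
    # stronger detectors (LoopDelta additionally exploits locality).
    return max(hash(chunk[i:i + k]) for i in range(len(chunk) - k + 1))

def store(chunk):
    h = hashlib.sha256(chunk).hexdigest()
    if h in dedup_index:                  # exact duplicate: dedup hit, no data
        return ("dup", h)
    f = feature(chunk)
    base_h = feature_index.get(f)
    if base_h is not None:                # similar base found: delta-compress
        base = dedup_index[base_h]
        delta = zlib.compress(bytes(a ^ b for a, b in zip(chunk, base)))
        record = ("delta", base_h, delta)
    else:                                 # no base: store compressed as-is
        record = ("full", zlib.compress(chunk))
    dedup_index[h] = chunk                # keep raw chunk as a future base
    feature_index.setdefault(f, h)
    return record
```

The extra I/Os the abstract worries about arise exactly at the `dedup_index[base_h]` read: on real backup storage the base chunk lives on disk, which is what LoopDelta's prefetching and cache-aware filtering target.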
  3. Despite their tremendous success in a range of domains, deep learning systems are inherently susceptible to two types of manipulations: adversarial inputs -- maliciously crafted samples that deceive target deep neural network (DNN) models, and poisoned models -- adversely forged DNNs that misbehave on pre-defined inputs. While prior work has intensively studied the two attack vectors in parallel, there is still a lack of understanding about their fundamental connections: what are the dynamic interactions between the two attack vectors? what are the implications of such interactions for optimizing existing attacks? what are the potential countermeasures against the enhanced attacks? Answering these key questions is crucial for assessing and mitigating the holistic vulnerabilities of DNNs deployed in realistic settings. Here we take a solid step towards this goal by conducting the first systematic study of the two attack vectors within a unified framework. Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions. 
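To make the joint optimization in (i) concrete, the toy sketch below alternates gradient steps on a poisoned model and on an adversarial input perturbation for a logistic-regression stand-in, so the mutual dependence of the two attack vectors is explicit. All values are made up and the model is deliberately tiny; this is an illustration of the idea, not the paper's attack model.

```python
# Toy joint optimization of a poisoned model and an adversarial input.
import numpy as np

rng = np.random.default_rng(0)
Xc = rng.normal(size=(200, 10))                       # clean training data
yc = (Xc @ rng.normal(size=10) > 0).astype(float)     # clean labels
w = np.zeros(10)                                      # model weights (to poison)
x = rng.normal(size=10)                               # target input
delta = np.zeros(10)                                  # adversarial perturbation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # (1) Poison the model: fit clean data while pushing x+delta to label 1.
    grad_w = Xc.T @ (sigmoid(Xc @ w) - yc) / len(yc)          # clean-loss term
    grad_w += (sigmoid((x + delta) @ w) - 1.0) * (x + delta)  # attack term
    w -= 0.1 * grad_w
    # (2) Adapt the input to the current poisoned model ("mutual reinforcement"):
    # each vector's update makes the other's objective easier.
    grad_d = (sigmoid((x + delta) @ w) - 1.0) * w
    delta = np.clip(delta - 0.1 * grad_d, -0.5, 0.5)          # bounded perturbation

print("poisoned model on x+delta:", sigmoid((x + delta) @ w))  # driven toward 1
```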
  4. Force data for a flapping foil energy harvester with active leading edge motion operating in the low reduced frequency range is collected to determine how leading edge motion affects energy harvesting performance. The foil pivots about the midchord and operates in the low reduced frequency range of fc/U∞ = 0.06, 0.08, and 0.10 with Re = 20,000–30,000, with a pitching amplitude of θ0 = 70°, and a heaving amplitude of h0 = 0.5c. It is found that leading edge motions that reduce the effective angle of attack early in the stroke work to both increase the lift forces as well as shift the peak lift force later in the flapping stroke. Leading edge motions in which the effective angle of attack is increased early in the stroke show decreased performance. In addition, a discrete vortex model with vortex shedding at the leading edge is implemented for the motions studied; it is found that the mechanism for shedding at the leading edge is not adequate for this parameter range and the model consistently overpredicts the aerodynamic forces.
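As a quick check on the dimensionless parameters quoted above, the arithmetic below computes the reduced frequency and Reynolds number from one set of assumed flow values; the specific numbers are made up, chosen only to land in the reported ranges.

```python
# Worked example of the dimensionless groups in the abstract.
f = 0.15        # flapping frequency, Hz (assumed)
c = 0.1         # chord length, m (assumed)
U_inf = 0.25    # freestream speed, m/s (assumed)
nu = 1.0e-6     # kinematic viscosity of water, m^2/s

k = f * c / U_inf    # reduced frequency fc/U_inf = 0.06 (low end of range)
Re = U_inf * c / nu  # Reynolds number = 25,000 (within 20,000-30,000)
h0 = 0.5 * c         # heaving amplitude of half a chord = 0.05 m
```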
  5. It has been demonstrated in numerous previous studies that Android and its underlying Linux operating systems do not properly isolate mobile apps to prevent cross-app side-channel attacks. Cross-app information leakage enables malicious Android apps to infer sensitive user data (e.g., passwords), or private user information (e.g., identity or location) without requiring specific permissions. Nevertheless, no prior work has ever studied these side-channel attacks on iOS-based mobile devices. One reason is that iOS does not implement procfs, the most popular side-channel attack vector; hence the previously known attacks are not feasible. In this paper, we present the first study of OS-level side-channel attacks on iOS. Specifically, we identified several new side-channel attack vectors (i.e., iOS APIs that enable cross-app information leakage); developed machine learning frameworks (i.e., classification and pattern matching) that combine multiple attack vectors to improve the accuracy of the inference attacks; demonstrated three categories of attacks that exploit these vectors and frameworks to exfiltrate sensitive user information. We have reported our findings to Apple and proposed mitigations to the attacks. Apple has incorporated some of our suggested countermeasures into iOS 11 and MacOS High Sierra 10.13 and later versions.