Force data for a flapping foil energy harvester with active leading edge motion operating in the low reduced frequency range is collected to determine how leading edge motion affects energy harvesting performance. The foil pivots about the mid-chord and operates in the low reduced frequency range of fc/U∞ = 0.06, 0.08, and 0.10 with Re = 20,000–30,000, with a pitching amplitude of θ0 = 70°, and a heaving amplitude of h0 = 0.5c. It is found that leading edge motions that reduce the effective angle of attack early in the stroke work to both increase the lift forces as well as shift the peak lift force later in the flapping stroke. Leading edge motions in which the effective angle of attack is increased early in the stroke show decreased performance. In addition, a discrete vortex model with vortex shedding at the leading edge is implemented for the motions studied; it is found that the mechanism for shedding at the leading edge is not adequate for this parameter range, and the model consistently overpredicts the aerodynamic forces.
Stationary Syndrome Decoding for Improved PCGs
Syndrome decoding (SD), and equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography, which states that for a field F, some compressing public matrix G ∈ F^k×n, and a secret sparse vector e ∈ F^n sampled from some noise distribution, G e is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs). In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider q correlated noise vectors e1, ..., eq ∈ F^n and associated instances G1 e1, ..., Gq eq, where the noise vectors are restricted to having non-zeros in the same small subset of t positions L ⊂ [n]. That is, for all i ∈ L, ej,i is uniformly random, while for all other i, ej,i = 0. Although naively reusing the noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Øygarden, Eurocrypt 2023) to SSD and demonstrate resistance to the attack both theoretically and experimentally. We apply SSD to PCGs to amortize the cost of the noise generation protocol. For OT and VOLE generation, each instance requires O(t) communication instead of O(t log n). For suggested parameters, we observe a 1.5× improvement in the running time or between 6 and 18× reduction in communication. For Beaver triple generation using Ring LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of the Ring LPN noise generation.
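To make the shared-support structure concrete, below is a minimal sketch of sampling SSD instances over a small prime field. The modulus, dimensions, and variable names are illustrative assumptions, not the paper's suggested parameters.

```python
import numpy as np

# Toy parameters (assumptions for illustration, not suggested ones).
p = 257                      # small prime field F_p
n, k, t, q = 64, 32, 4, 3    # noise length, syndrome length, weight, instances

rng = np.random.default_rng(0)

# One secret support L of t positions, shared by all q noise vectors.
L = rng.choice(n, size=t, replace=False)

instances = []
for _ in range(q):
    e = np.zeros(n, dtype=np.int64)
    e[L] = rng.integers(0, p, size=t)     # e_{j,i} uniform on L, zero elsewhere
    G = rng.integers(0, p, size=(k, n))   # fresh public compressing matrix G_j
    instances.append((G, (G @ e) % p))    # SSD sample (G_j, G_j e_j)
```

Note that only the support L is reused; each e_j draws fresh values on it. Naively reusing one fixed e across all q instances would instead let an attacker stack the systems and recover e by Gaussian elimination, as noted above.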
- Award ID(s): 2217070
- PAR ID: 10635320
- Publisher / Repository: Springer Nature Switzerland
- Date Published:
- Page Range / eLocation ID: 284 to 317
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
As a complement to data deduplication, delta compression further reduces the data volume by compressing non-duplicate data chunks relative to their similar chunks (base chunks). However, existing post-deduplication delta compression approaches for backup storage either suffer from the low similarity between many detected chunks or miss some potential similar chunks, or suffer from low (backup and restore) throughput due to extra I/Os for reading base chunks or add additional service-disruptive operations to backup systems. In this paper, we propose LoopDelta to address the above-mentioned problems by an enhanced embedding delta compression scheme in deduplication in a non-intrusive way. The enhanced delta compression scheme combines four key techniques: (1) dual-locality-based similarity tracking to detect potential similar chunks by exploiting both logical and physical locality, (2) locality-aware prefetching to prefetch base chunks to avoid extra I/Os for reading base chunks on the write path, (3) a cache-aware filter to avoid extra I/Os for base chunks on the read path, and (4) inversed delta compression to perform delta compression for data chunks that are otherwise forbidden to serve as base chunks by rewriting techniques designed to improve restore performance. Experimental results indicate that LoopDelta increases the compression ratio by 1.24 to 10.97 times on top of deduplication, without notably affecting the backup throughput, and it improves the restore performance by 1.2 to 3.57 times.
-
A great deal of interest surrounds the use of transcranial direct current stimulation (tDCS) to augment cognitive training. However, effects are inconsistent across studies, and meta-analytic evidence is mixed, especially for healthy, young adults. One major source of this inconsistency is individual differences among the participants, but these differences are rarely examined in the context of combined training/stimulation studies. In addition, it is unclear how long the effects of stimulation last, even in successful interventions. Some studies make use of follow-up assessments, but very few have measured performance more than a few months after an intervention. Here, we utilized data from a previous study of tDCS and cognitive training [Au, J., Katz, B., Buschkuehl, M., Bunarjo, K., Senger, T., Zabel, C., et al. Enhancing working memory training with transcranial direct current stimulation. Journal of Cognitive Neuroscience, 28, 1419–1432, 2016] in which participants trained on a working memory task over 7 days while receiving active or sham tDCS. A new, longer-term follow-up to assess later performance was conducted, and additional participants were added so that the sham condition was better powered. We assessed baseline cognitive ability, gender, training site, and motivation level and found significant interactions between both baseline ability and motivation with condition (active or sham) in models predicting training gain. In addition, the improvements in the active condition versus sham condition appear to be stable even as long as a year after the original intervention.
-
As more educators integrate their curricula with online learning, it is easier to crowdsource content from them. Crowdsourced tutoring has been proven to reliably increase students' next-problem correctness. In this work, we confirmed the findings of a previous study in this area, with stronger confidence margins than previously, and revealed that only a portion of crowdsourced content creators had a reliable benefit to students. Furthermore, this work provides a method to rank content creators relative to each other, which was used to determine which content creators were most effective overall, and which content creators were most effective for specific groups of students. When exploring data from TeacherASSIST, a feature within the ASSISTments learning platform that crowdsources tutoring from teachers, we found that while overall this program provides a benefit to students, some teachers created more effective content than others. Despite this finding, we did not find evidence that the effectiveness of content reliably varied by student knowledge-level, suggesting that the content is unlikely suitable for personalizing instruction based on student knowledge alone. These findings are promising for the future of crowdsourced tutoring as they help provide a foundation for assessing the quality of crowdsourced content and investigating content for opportunities to personalize students' education.
-
We generalize Hermite interpolation with error correction, which is the methodology for multiplicity algebraic error correction codes, to Hermite interpolation of a rational function over a field K from function and function derivative values. We present an interpolation algorithm that can locate and correct <= E errors at distinct arguments y in K where at least one of the values or values of a derivative is incorrect. The upper bound E for the number of such y is input. Our algorithm sufficiently oversamples the rational function to guarantee a unique interpolant. We sample (f/g)^(j)(y[i]) for 0 <= j <= L[i], 1 <= i <= n, y[i] distinct, where (f/g)^(j) is the j-th derivative of the rational function f/g, f, g in K[x], GCD(f,g) = 1, g != 0, and where N = (L[1]+1)+...+(L[n]+1) >= C + D + 1 + 2(L[1]+1) + ... + 2(L[E]+1), where C is an upper bound for deg(f) and D an upper bound for deg(g), which are input to our algorithm. The arguments y[i] can be poles, which is truly or falsely indicated by a function value infinity with the corresponding L[i] = 0. Our results remain valid for fields K of characteristic >= 1 + max L[i]. Our algorithm has the same asymptotic arithmetic complexity as that for classical Hermite interpolation, namely soft-O(N). For polynomials, that is, g = 1, and a uniform derivative profile L[1] = ... = L[n], our algorithm specializes to the univariate multiplicity code decoder that is based on the 1986 Welch-Berlekamp algorithm.