Title: Optimizing Finite-Blocklength Nested Linear Secrecy Codes: Using the Worst Code to Find the Best Code

Nested linear coding is a widely used technique in wireless communication systems for improving both security and reliability. Some parameters, such as the relative generalized Hamming weight and the relative dimension/length profile, can be used to characterize the performance of nested linear codes. In addition, the rank properties of generator and parity-check matrices precisely characterize their security performance. Despite this, finding optimal nested linear secrecy codes remains a challenge in the finite-blocklength regime, often requiring brute-force search methods. This paper investigates the properties of nested linear codes, introduces a new representation of the relative generalized Hamming weight, and proposes a novel method for finding the best nested linear secrecy code for the binary erasure wiretap channel by working from the worst nested linear secrecy code in the dual space. We demonstrate that our algorithm significantly outperforms the brute-force technique in both speed and efficiency.
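The rank characterization mentioned above has a concrete form for the binary erasure wiretap channel. Below is a minimal Python sketch (not the paper's algorithm) of the standard result for simple coset/syndrome coding: when the eavesdropper's channel erases a set of positions, the leakage about the k-bit message equals k minus the GF(2) rank of the parity-check submatrix on the erased positions. The matrix H, the blocklength, and the erasure count are arbitrary illustrative choices; the paper treats general nested code pairs, for which repeating this kind of per-pattern rank evaluation is what makes brute-force code search expensive.

    import itertools
    import numpy as np

    def gf2_rank(M):
        """Rank of a binary matrix over GF(2) via Gaussian elimination."""
        M = M.copy() % 2
        rank = 0
        rows, cols = M.shape
        for col in range(cols):
            pivot = next((r for r in range(rank, rows) if M[r, col]), None)
            if pivot is None:
                continue
            M[[rank, pivot]] = M[[pivot, rank]]        # move the pivot row into place
            for r in range(rows):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]                    # clear the rest of the column
            rank += 1
        return rank

    def leakage_bits(H, erased):
        """Bits the eavesdropper learns about the secret message when exactly the
        positions in `erased` are erased on its channel (coset coding with syndromes of H)."""
        k = gf2_rank(H)
        return k - gf2_rank(H[:, list(erased)])

    # Hypothetical 2 x 5 parity-check matrix; worst-case leakage over all patterns
    # with mu erasures is the kind of exhaustive evaluation a code search repeats.
    H = np.array([[1, 0, 1, 1, 0],
                  [0, 1, 1, 0, 1]], dtype=np.uint8)
    n, mu = H.shape[1], 3
    worst = max(leakage_bits(H, e) for e in itertools.combinations(range(n), mu))
    print("worst-case leakage over", mu, "erasures:", worst, "bit(s)")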

 
Award ID(s): 1910812
NSF-PAR ID: 10490044
Author(s) / Creator(s): ;
Publisher / Repository: MDPI
Date Published:
Journal Name: Entropy
Volume: 25
Issue: 10
ISSN: 1099-4300
Page Range / eLocation ID: 1456
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Consider a linear code defined as a mapping between vector spaces of dimensions k and n. Let β∗ denote the minimal (relative) weight among all images of input vectors of full Hamming weight k. Operationally, β∗ characterizes the threshold for adversarial (erasure) noise beyond which the decoder is guaranteed to produce an estimate of the k-symbol input with 100% symbol error rate (SER). This paper studies the relation between β∗ and δ, the minimum distance of the code, which gives the threshold for 0% SER. An optimal tradeoff between β∗ and δ is obtained (over large alphabets) and all linear codes achieving β∗ = 1 are classified: they are repetition-like. More generally, a design criterion is proposed for codes with favorable graceful degradation properties. As an example, it is shown that in an overdetermined system of n homogeneous linear equations in k variables (over a field) it is always possible to satisfy some k − 1 equations with non-zero assignments to every unknown, provided that any subset of k equations is linearly independent. This statement is true if and only if n ≥ 2k − 1. 
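     The final claim lends itself to a quick numerical sanity check. The sketch below (illustrative only; the field GF(5), k = 3, and the random sampling are my own choices, not the paper's) samples small systems with n = 2k − 1 equations whose every k rows are linearly independent and checks that some everywhere-nonzero assignment satisfies at least k − 1 of them:

        import itertools
        import random
        import numpy as np

        q, k = 5, 3                    # prime field GF(5) and number of unknowns (arbitrary choices)
        n = 2 * k - 1                  # the threshold n >= 2k - 1 from the abstract

        def det_mod(M, q):
            """Determinant of a small integer matrix modulo the prime q."""
            return round(np.linalg.det(M)) % q

        def every_k_rows_independent(A, q):
            k = A.shape[1]
            return all(det_mod(A[list(rows)], q) != 0
                       for rows in itertools.combinations(range(A.shape[0]), k))

        random.seed(0)
        checked = held = 0
        for _ in range(40):            # sample a handful of random systems
            A = np.array([[random.randrange(q) for _ in range(k)] for _ in range(n)])
            if not every_k_rows_independent(A, q):
                continue
            best = max(int(np.sum((A @ np.array(x)) % q == 0))   # equations satisfied by x
                       for x in itertools.product(range(1, q), repeat=k))
            checked += 1
            held += best >= k - 1
        print(f"claim held for {held} of {checked} sampled systems")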
  2. Locally Decodable Codes (LDCs) are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. LDCs for Hamming errors have been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient to decode from large amounts of error, with small query complexity. Despite exciting progress, we still don't have satisfactory answers in several important parameter regimes. For example, in the case of 3-query LDCs, the gap between existing constructions and lower bounds is superpolynomial in the message length. In this work we study LDCs for insertion and deletion errors, called Insdel LDCs. Their study was initiated by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), who gave a reduction from Hamming LDCs to Insdel LDCs with a small blowup in the code parameters. On the other hand, the only known lower bounds for Insdel LDCs come from those for Hamming LDCs, thus there is no separation between them. Here we prove new, strong lower bounds for the existence of Insdel LDCs. In particular, we show that 2-query linear Insdel LDCs do not exist, and give an exponential lower bound for the length of all q-query Insdel LDCs with constant q. For q ≥ 3 our bounds are exponential in the existing lower bounds for Hamming LDCs. Furthermore, our exponential lower bounds continue to hold for adaptive decoders, and even in private-key settings where the encoder and decoder share secret randomness. This exhibits a strict separation between Hamming LDCs and Insdel LDCs. Our strong lower bounds also hold for the related notion of Insdel LCCs (except in the private-key setting), due to an analogue, for the Insdel notions, of the reduction from Hamming LCCs to Hamming LDCs. Our techniques are based on a delicate design and analysis of hard distributions of insertion and deletion errors, which depart significantly from typical techniques used in analyzing Hamming LDCs. 
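     For readers less familiar with the objects being separated here, the Hadamard code is the textbook 2-query Hamming LDC; the toy Python sketch below (background illustration only, not from the paper) shows its two-query local decoder, which tolerates a small fraction of Hamming errors. The abstract's point is that no analogous 2-query linear code survives insertion/deletion errors:

        import random

        def hadamard_encode(x):
            """Position a of the codeword holds the inner product <a, x> over GF(2)."""
            k = len(x)
            return [sum(x[i] * ((a >> i) & 1) for i in range(k)) % 2 for a in range(2 ** k)]

        def local_decode_bit(codeword, i, k, trials=15):
            """Recover x_i with two queries per trial; a majority vote tolerates a few corruptions."""
            votes = 0
            for _ in range(trials):
                r = random.randrange(2 ** k)
                votes += codeword[r] ^ codeword[r ^ (1 << i)]   # equals x_i if neither query hit an error
            return int(votes > trials // 2)

        x = [1, 0, 1, 1]
        c = hadamard_encode(x)
        c[3] ^= 1                                               # inject one Hamming error
        print([local_decode_bit(c, i, len(x)) for i in range(len(x))])   # recovers x with high probability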
  3. An expurgating linear function (ELF) is an outer code that disallows low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. A list-decoding sieve can efficiently identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides analytical distance spectrum union (DSU) bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of K = 64 bits or a constant codeword blocklength of N = 152 bits, an ELF can reduce the gap at a CER of 10⁻⁶ between the DSU and the random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. List Viterbi decoding guided by the ELF achieves maximum-likelihood (ML) decoding of the concatenated code with a sufficiently large list size. The rate-K/(K+m) ELF outer code reduces the overall rate, and list decoding increases decoder complexity. As SNR increases, the average list size converges to 1 and the average complexity is similar to Viterbi decoding on the trellis of the inner code. For rare large-magnitude noise events, which occur less often than the frame error rate (FER) of the inner code, a deep search in the list finds the ML codeword. 
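     The expurgation idea itself is easy to see on a toy example. The sketch below (my own illustration; it uses a small generic [6, 3] linear block code and a single-parity ELF rather than the convolutional codes and CRC-style ELFs studied in the paper) shows how restricting the inner-code messages with one extra linear check removes the lowest-weight codewords and raises the minimum distance, at the cost of rate:

        import itertools
        import numpy as np

        G = np.array([[1, 0, 0, 1, 1, 0],          # hypothetical generator of a [6, 3] inner code
                      [0, 1, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1]], dtype=np.uint8)
        elf = np.array([1, 1, 1], dtype=np.uint8)  # keep only messages u with <elf, u> = 0 over GF(2)

        def min_weight(messages):
            """Minimum Hamming weight over the nonzero codewords generated by `messages`."""
            return min(int(((np.array(u, dtype=np.uint8) @ G) % 2).sum())
                       for u in messages if any(u))

        all_msgs = list(itertools.product([0, 1], repeat=3))
        expurgated = [u for u in all_msgs if sum(e * b for e, b in zip(elf, u)) % 2 == 0]
        print("d_min of inner code:      ", min_weight(all_msgs))        # 3 for this toy code
        print("d_min after expurgation:  ", min_weight(expurgated))      # 4: low-weight codewords removed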
  4. One of the key challenges arising when compilers vectorize loops for today’s SIMD-compatible architectures is to decide whether vectorization or interleaving is beneficial. Then, the compiler has to determine the number of instructions to pack together and the interleaving level (stride). Compilers today use fixed-cost models based on heuristics to make vectorization decisions on loops. However, these models are unable to capture the data dependency, the computation graph, or the organization of instructions. Alternatively, software engineers often hand-write the vectorization factors of every loop. This, however, places a huge burden on them, since it requires prior experience and significantly increases the development time. In this work, we explore a novel approach for handling loop vectorization and propose an end-to-end solution using deep reinforcement learning (RL). We conjecture that deep RL can capture different instructions, dependencies, and data structures to enable learning a sophisticated model that can better predict the actual performance cost and determine the optimal vectorization factors. We develop an end-to-end framework, from code to vectorization, that integrates deep RL in the LLVM compiler. Our proposed framework takes benchmark codes as input and extracts the loop codes. These loop codes are then fed to a loop embedding generator that learns an embedding for these loops. Finally, the learned embeddings are used as input to a deep RL agent, which dynamically determines the vectorization factors for all the loops. We further extend our framework to support random search, decision trees, supervised neural networks, and nearest-neighbor search. We evaluate our approaches against the currently used LLVM vectorizer and loop polyhedral optimization techniques. Our experiments show a 1.29× to 4.73× performance speedup compared to the baseline and only 3% worse than the brute-force search on a wide range of benchmarks. 
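     As a rough illustration of the decision problem only (this is not the paper's framework, which trains a deep RL agent on learned loop embeddings inside LLVM), the toy sketch below has an epsilon-greedy bandit pick a (vectorization factor, interleave factor) pair for a loop from a discrete action space, using a made-up reward function standing in for measured speedup:

        import random

        ACTIONS = [(vf, il) for vf in (1, 2, 4, 8, 16) for il in (1, 2, 4)]   # candidate (VF, IF) pairs

        def measured_speedup(action):
            """Placeholder reward: a real system would compile and time the loop with these factors."""
            vf, il = action
            return 1.0 + 0.1 * vf - 0.002 * (vf * il) ** 1.5 + random.gauss(0, 0.05)

        def pick_factors(trials=300, eps=0.1):
            value = {a: 0.0 for a in ACTIONS}
            count = {a: 0 for a in ACTIONS}
            for t in range(trials):
                explore = t == 0 or random.random() < eps
                a = random.choice(ACTIONS) if explore else max(ACTIONS, key=lambda act: value[act])
                r = measured_speedup(a)
                count[a] += 1
                value[a] += (r - value[a]) / count[a]          # running-average value update
            return max(ACTIONS, key=lambda act: value[act])

        print("chosen (vectorization factor, interleave factor):", pick_factors())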
  5. Fuzzy extractors derive stable keys from noisy sources. They are a fundamental tool for key derivation from biometric sources. This work introduces a new construction, code offset in the exponent. This construction is the first reusable fuzzy extractor that simultaneously supports structured, low entropy distributions with correlated symbols and confidence information. These properties are specifically motivated by the most pertinent applications – key derivation from biometrics and physical unclonable functions – which typically demonstrate low entropy with additional statistical correlations and benefit from extractors that can leverage confidence information for efficiency. Code offset in the exponent is a group encoding of the code offset construction (Juels and Wattenberg, CCS 1999). A random codeword of a linear error-correcting code is used as a one-time pad for a sampled value from the noisy source. Rather than encoding this directly, code offset in the exponent encodes by exponentiation of a generator in a cryptographically strong group. We introduce and characterize a condition on noisy sources that directly translates to security of our construction in the generic group model. Our condition requires the inner product between the source distribution and all vectors in the null space of the code to be unpredictable. 
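     For context, the underlying code-offset construction of Juels and Wattenberg that the abstract refers to can be sketched in a few lines; the toy below (my own illustration, using a 3x repetition code over GF(2) and a hash-derived key) shows enrollment and reproduction, but not the paper's group-exponent encoding or its security analysis:

        import hashlib
        import secrets

        K = 8                                                  # message bits; codeword length is 3 * K

        def rep_encode(bits):   return [b for b in bits for _ in range(3)]       # 3x repetition code
        def rep_decode(bits):   return [int(sum(bits[3 * i:3 * i + 3]) >= 2) for i in range(len(bits) // 3)]
        def xor(a, b):          return [x ^ y for x, y in zip(a, b)]
        def key_from(codeword): return hashlib.sha256(bytes(codeword)).hexdigest()[:16]

        def gen(w):
            """Enrollment: derive (key, public helper) from a noisy source reading w."""
            c = rep_encode([secrets.randbelow(2) for _ in range(K)])             # random codeword
            return key_from(c), xor(c, w)                                        # helper = c XOR w

        def rep(w_prime, helper):
            """Reproduction: recover the key from a later reading within the code's error radius."""
            return key_from(rep_encode(rep_decode(xor(helper, w_prime))))

        w = [secrets.randbelow(2) for _ in range(3 * K)]       # enrollment reading of the source
        w_noisy = w.copy(); w_noisy[5] ^= 1; w_noisy[17] ^= 1  # later reading with two bit flips
        key, helper = gen(w)
        print(key == rep(w_noisy, helper))                     # True: the flips are corrected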