Title: Explicit Time and Space Efficient Encoders Exist Only with Random Access
We give the first explicit constant-rate, constant-relative-distance linear codes with an encoder that runs in time n^{1 + o(1)} and space polylog(n), provided random access to the message. Prior to this work, the only such codes were non-explicit, for instance repeat-accumulate codes [Divsalar et al., 1998] and the codes described in [Gál et al., 2013]. To construct our codes, we also give explicit, efficiently invertible, lossless condensers with constant entropy gap and polylogarithmic seed length. In contrast to encoders with random access to the message, we show that encoders with sequential access to the message cannot run in almost-linear time and polylogarithmic space. Our notion of sequential access is much stronger than streaming access.
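To make the access model concrete, here is a minimal sketch of an encoder with random access to the message and polylogarithmic working space: each codeword bit is an XOR of a few message bits at pseudorandomly derived positions, so the encoder never holds the whole message in memory. This is not the paper's construction (which builds on lossless condensers and guarantees constant rate and distance); the tap count and the SHA-256-based position derivation are illustrative choices, and the toy code below carries no distance guarantee.

```python
import hashlib

def message_bit(msg: bytes, i: int) -> int:
    """Random access: read the i-th bit of the message in O(1)."""
    return (msg[i // 8] >> (i % 8)) & 1

def encode_bit(msg: bytes, n_bits: int, j: int, taps: int = 16) -> int:
    """Compute codeword bit j as an XOR of `taps` message bits at
    pseudorandomly derived positions.  Working space is O(taps * log n),
    independent of the message length."""
    out = 0
    for t in range(taps):
        h = hashlib.sha256(f"{j}:{t}".encode()).digest()  # illustrative PRG
        out ^= message_bit(msg, int.from_bytes(h[:8], "big") % n_bits)
    return out

msg = b"pretend this is a very long message"
n_bits = 8 * len(msg)
codeword = [encode_bit(msg, n_bits, j) for j in range(2 * n_bits)]  # rate 1/2
```

A sequential-access or streaming encoder could not revisit message positions this way; that restricted regime is exactly where the paper proves its time-space lower bound.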
Award ID(s):
2200956
PAR ID:
10585119
Author(s) / Creator(s):
Editor(s):
Santhanam, Rahul
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Volume:
300
ISSN:
1868-8969
ISBN:
978-3-95977-331-7
Page Range / eLocation ID:
5:1-5:54
Subject(s) / Keyword(s):
Time-Space Trade Offs; Error Correcting Codes; Encoders; Explicit Constructions; Streaming Lower Bounds; Sequential Access; Time-Space Lower Bounds; Lossless Condensers; Invertible Condensers; Condensers; Theory of computation → Error-correcting codes; Theory of computation → Expander graphs and randomness extractors; Theory of computation → Streaming models; Theory of computation → Lower bounds and information complexity
Format(s):
Medium: X; Size: 54 pages, 1089147 bytes; Other: application/pdf
Size(s):
54 pages; 1089147 bytes
Right(s):
Creative Commons Attribution 4.0 International license; info:eu-repo/semantics/openAccess
Sponsoring Org:
National Science Foundation
More Like This
  1. Etessami, Kousha; Feige, Uriel; Puppis, Gabriele (Ed.)
    This work continues the study of linear error-correcting codes against adversarial insertion-deletion errors (insdel errors). Previously, the work of Cheng, Guruswami, Haeupler, and Li [Kuan Cheng et al., 2021] showed the existence of asymptotically good linear insdel codes that can correct arbitrarily close to 1 fraction of errors over some constant-size alphabet, or achieve rate arbitrarily close to 1/2 even over the binary alphabet. As shown in [Kuan Cheng et al., 2021], these bounds are also the best possible. However, known explicit constructions in [Kuan Cheng et al., 2021], and subsequent improved constructions by Con, Shpilka, and Tamo [Con et al., 2022], all fall short of meeting these bounds. Over any constant-size alphabet, they can only achieve rate < 1/8 or correct < 1/4 fraction of errors; over the binary alphabet, they can only achieve rate < 1/1216 or correct < 1/54 fraction of errors. Previous techniques appear to face inherent barriers to achieving rate better than 1/4 or correcting more than a 1/2 fraction of errors. In this work we give new constructions of such codes that meet these bounds, namely, asymptotically good linear insdel codes that can correct arbitrarily close to 1 fraction of errors over some constant-size alphabet, and binary asymptotically good linear insdel codes that can achieve rate arbitrarily close to 1/2. All our constructions are efficiently encodable and decodable. Our constructions are based on a novel approach of code concatenation, which embeds the index information implicitly into codewords. This significantly differs from previous techniques and may be of independent interest. Finally, we also prove the existence of linear concatenated insdel codes with parameters that match random linear codes, and propose a conjecture about linear insdel codes.
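For intuition about why index information helps against insdel errors, here is a toy sketch of the older, explicit index-embedding style of concatenation that this paper improves upon (the paper's contribution is to embed the index implicitly). The marker pattern and bit widths are arbitrary illustrative choices, and the inner blocks are left unprotected for brevity.

```python
from typing import List

def inner_encode(index: int, symbol: int, idx_bits: int = 16, sym_bits: int = 8) -> str:
    """Toy inner block: a crude synchronization marker followed by the
    (position, symbol) pair in binary.  Real constructions protect this
    pair with an inner insdel code."""
    payload = format(index, f"0{idx_bits}b") + format(symbol, f"0{sym_bits}b")
    return "0000" + "1" + payload  # '00001' marks a block boundary

def concat_encode(outer_codeword: List[int]) -> str:
    # Each outer symbol travels with its own position, so a decoder can
    # re-synchronize after insertions and deletions shift the boundaries.
    return "".join(inner_encode(i, s) for i, s in enumerate(outer_codeword))

print(concat_encode([3, 141, 59]))
```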
  2. Convolutional codes are widely used in many applications. The encoders can be implemented with a simple circuit. Decoding is often accomplished by the Viterbi algorithm or the maximum a posteriori decoder of Bahl et al. These algorithms are sequential in nature, requiring a decoding time proportional to the message length, which can be problematic for low-latency applications. This paper introduces a low-latency decoder for tail-biting convolutional codes (TBCCs) that processes multiple trellis stages in parallel. The new decoder is designed for hardware with parallel processing capabilities. The overall decoding latency is proportional to the log of the message length. The new decoding architecture is modified into a list decoder, and the list-decoding performance can be enhanced by exploiting linearity to expand the search space. Certain modifications to standard TBCCs are supported by the new architecture and improve frame-error-rate performance.
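As background for the latency claim, here is a minimal sketch of the standard sequential Viterbi decoder for a toy rate-1/2 code with generators (7, 5) in octal: it processes one trellis stage per message bit, which is exactly the linear-in-length latency the paper's parallel, log-depth architecture avoids. The generator choice is illustrative, and for brevity the sketch starts from the all-zero state rather than handling the tail-biting wraparound.

```python
# Toy rate-1/2 convolutional code with generators (7, 5) in octal,
# constraint length K = 3 (4 trellis states).
G = [0b111, 0b101]
K = 3
NSTATES = 1 << (K - 1)

def conv_encode(bits):
    """Shift-register encoder: each input bit emits two output bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.extend(bin(reg & g).count("1") & 1 for g in G)
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Sequential Viterbi decoding: one trellis stage per message bit,
    so latency grows linearly with message length."""
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for state in range(NSTATES):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | state
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[state] + sum(x != y for x, y in zip(r, expect))
                nxt = reg >> 1
                if m < new_metric[nxt]:
                    new_metric[nxt], new_paths[nxt] = m, paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(NSTATES), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(conv_encode(msg), len(msg)) == msg
```

The sketch only shows the sequential baseline the paper compares against; the paper's architecture instead processes multiple trellis stages in parallel to reach logarithmic latency.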
  3. Chan, Timothy; Fischer, Johannes; Iacono, John; Herman, Grzegorz (Ed.)
    The maximum coverage problem is to select k sets, from a collection of m sets, such that the cardinality of their union, in a universe of size n, is maximized. We consider (1-1/e-ε)-approximation algorithms for this NP-hard problem in three standard data stream models. 1) Dynamic Model. The stream consists of a sequence of sets being inserted and deleted. Our multi-pass algorithm uses ε^{-2} k ⋅ polylog(n,m) space. The best previous result (Assadi and Khanna, SODA 2018) used (n +ε^{-4} k) polylog(n,m) space. While both algorithms use O(ε^{-1} log m) passes, our analysis shows that, when ε ≤ 1/log log m, it is possible to reduce the number of passes by a 1/log log m factor without incurring additional space. 2) Random Order Model. In this model, there are no deletions, and the sets forming the instance are uniformly randomly permuted to form the input stream. We show that a single pass and k polylog(n,m) space suffices for arbitrarily small constant ε. The best previous result, by Warneke et al. (ESA 2023), used k² polylog(n,m) space. 3) Insert-Only Model. Lastly, our results, along with numerous previous results, use a sub-sampling technique introduced by McGregor and Vu (ICDT 2017) to sparsify the input instance. We explain how this technique and others used in the paper can be implemented such that the amortized update time of our algorithm is polylogarithmic. This also implies an improvement of the state-of-the-art insert-only algorithms in terms of the update time: polylog(m,n) update time suffices, whereas the best previous result by Jaud et al. (SEA 2023) required update time that was linear in k.
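For reference, the classical offline greedy algorithm below is the (1-1/e)-approximation these streaming algorithms emulate under space constraints; it is a baseline sketch, not the paper's multi-pass or subsampled algorithm.

```python
from typing import List, Set, Tuple

def greedy_max_coverage(sets: List[Set[int]], k: int) -> Tuple[List[int], Set[int]]:
    """Offline greedy (1 - 1/e)-approximation: repeatedly pick the set
    covering the most not-yet-covered elements."""
    covered: Set[int] = set()
    chosen: List[int] = []
    for _ in range(k):
        gains = [len(s - covered) for s in sets]
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Example: pick k=2 of m=4 sets over a universe of size n=6.
print(greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}], 2))
```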
  4. A conditional sampling oracle for a probability distribution D returns samples from the conditional distribution of D restricted to a specified subset of the domain. A recent line of work (Chakraborty et al. 2013 and Canonne et al. 2014) has shown that having access to such a conditional sampling oracle requires only a polylogarithmic or even constant number of samples to solve distribution testing problems like identity and uniformity. This significantly improves over the standard sampling model, where polynomially many samples are necessary. Inspired by these results, we introduce a computational model based on conditional sampling to develop sublinear algorithms with exponentially faster runtimes compared to standard sublinear algorithms. We focus on geometric optimization problems over points in high-dimensional Euclidean space. Access to these points is provided via a conditional sampling oracle that takes as input a succinct representation of a subset of the domain and outputs a uniformly random point in that subset. We consider two well-studied problems: k-means clustering and estimating the weight of the minimum spanning tree. In contrast to prior algorithms for the classic model, our algorithms have time, space, and sample complexity that is polynomial in the dimension and polylogarithmic in the number of points. Finally, we comment on the applicability of the model and compare with existing ones like streaming, parallel and distributed computational models.
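A minimal sketch of the oracle interface the abstract describes: the caller supplies a succinct description of a subset (represented here simply as a predicate) and receives a uniformly random input point from that subset. In the actual model each query is answered at unit cost, whereas this toy scans all points, so it only illustrates the interface, not the speedup.

```python
import random
from typing import Callable, List, Optional, Tuple

Point = Tuple[float, ...]

class ConditionalSampler:
    """Toy conditional sampling oracle over a fixed point set."""
    def __init__(self, points: List[Point]):
        self.points = points

    def sample(self, predicate: Callable[[Point], bool]) -> Optional[Point]:
        # Return a uniformly random stored point inside the subset,
        # or None if the subset is empty.
        eligible = [p for p in self.points if predicate(p)]
        return random.choice(eligible) if eligible else None

oracle = ConditionalSampler([(1.0, 2.0), (3.5, 0.5), (2.2, 2.9)])
# Query: a uniform point inside an axis-aligned box, the kind of
# succinctly representable subset such algorithms use.
print(oracle.sample(lambda p: 0 <= p[0] <= 3 and 0 <= p[1] <= 3))
```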
  5. This paper presents new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size M=2^k . We provide bounds for two cases: The first case considers VLSF codes with possibly infinite decoding times and zero error probability. The second case limits the maximum (finite) number of decoding times and specifies a maximum tolerable probability of error. Both new achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first k message bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for BEC by Devassy et al. in 2016. We show that the backoff from capacity reduces to zero as the erasure probability decreases, thus giving a negative answer to the open question Devassy et al. posed on whether the 23.4% backoff to capacity at k=3 is fundamental to all BECs. For VLSF codes with finite decoding times, numerical evaluations show that the systematic transmission followed by random linear fountain coding performs better than random linear coding in terms of achievable rates. 
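The transmission scheme in the abstract can be sketched concretely: send the k message bits systematically, then random linear fountain parity bits, and stop once the unerased bits yield k linearly independent equations; the "rank decoder" is then just Gaussian elimination over GF(2). The erasure probability, seed handling, and stopping rule below are illustrative simplifications of the paper's setup.

```python
import random

def vlsf_bec(message, eps=0.3, seed=1):
    """Sketch: systematic bits, then random linear fountain parity bits,
    until the receiver's equations reach rank k (zero-error decoding).
    Returns the number of channel uses; averaging over many runs
    estimates the scheme's rate k / E[sent]."""
    rng = random.Random(seed)
    k = len(message)
    msg_int = sum(b << i for i, b in enumerate(message))
    basis = {}  # pivot bit -> (mask, value); rank == len(basis)

    def receive(mask, value):
        # Online Gaussian elimination over GF(2): reduce the incoming
        # equation against the basis, then insert it if independent.
        while mask and (mask.bit_length() - 1) in basis:
            m, v = basis[mask.bit_length() - 1]
            mask, value = mask ^ m, value ^ v
        if mask:
            basis[mask.bit_length() - 1] = (mask, value)

    pending = [(1 << i, message[i]) for i in range(k)]  # systematic phase
    sent = 0
    while len(basis) < k:
        if pending:
            mask, bit = pending.pop(0)
        else:                       # fountain phase: random parity checks
            mask = rng.getrandbits(k) or 1
            bit = bin(mask & msg_int).count("1") & 1
        sent += 1
        if rng.random() >= eps:     # bit survives the erasure channel
            receive(mask, bit)
    return sent

print(vlsf_bec([1, 0, 1, 1, 0, 1, 0, 0]))
```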