This work addresses the cooperation facilitator (CF) model, in which network nodes coordinate through a rate-limited communication device. For multiple-access channel (MAC) encoders, the CF model is known to show significant rate benefits, even when the rate of cooperation is negligible. Specifically, the benefit in MAC sum-rate, as a function of the cooperation rate C_CF, sometimes has an infinite slope at C_CF = 0 when the CF enables transmitter dependence where none was possible otherwise. This work asks whether cooperation through a CF can yield similar infinite-slope benefits when dependence among MAC transmitters has no benefit or when it can be established without the help of the CF. To that end, this work studies the CF model when applied to the relay nodes of a single-source, single-terminal diamond network comprising a broadcast channel followed by a MAC. In the relay channel with orthogonal receiver components, careful generalization of the partial-decode-forward/compress-forward lower bound to the CF model yields sufficient conditions for an infinite-slope benefit. Additional results include a family of diamond networks for which the infinite-slope rate benefit follows directly from the properties of the corresponding MAC studied in isolation.
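To make the infinite-slope notion concrete, here is a minimal numerical sketch (an illustration only, not code or a curve from the paper): if the sum-rate benefit grows like √C_CF near zero, its finite-difference slope at C_CF = 0 diverges.

```python
import math

# A hypothetical benefit curve with the qualitative shape described above:
# a sum-rate gain growing like sqrt(C_CF) has infinite slope at C_CF = 0.
# The sqrt form is assumed purely for illustration.
def benefit(c_cf):
    return math.sqrt(c_cf)

# The finite-difference slope benefit(C)/C diverges as C -> 0: the first
# few bits of cooperation are disproportionately valuable.
for c in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(f"C_CF = {c:.0e}  ->  slope ~ {benefit(c) / c:.1e}")
```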
Every Bit Counts: Second-Order Analysis of Cooperation in the Multiple-Access Channel
The work at hand presents a finite-blocklength analysis of the multiple-access channel (MAC) sum-rate under the cooperation facilitator (CF) model. The CF model, in which independent encoders coordinate through an intermediary node, is known to show significant rate benefits, even when the rate of cooperation is limited. We continue this line of study for cooperation rates that are sublinear in the blocklength n. Roughly speaking, our results show that if the facilitator transmits log K bits, then there is a sum-rate benefit of order √(log K / n) compared to the best-known achievable rate. This result extends across a wide range of K: even a single bit of cooperation is shown to provide a sum-rate benefit of order 1/√n.
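As a quick illustration of the claimed scaling (a sketch with a hypothetical unit constant; only the order √(log K / n) comes from the abstract):

```python
import math

# Order of the sum-rate benefit when the facilitator sends log2(K) bits
# at blocklength n: sqrt(log(K)/n).  The leading constant is assumed to
# be 1 here purely for illustration.
def benefit_order(K, n):
    return math.sqrt(math.log2(K) / n)

n = 10_000
for K in [2, 2**10, 2**100]:
    print(f"log2(K) = {int(math.log2(K)):>3} bits -> benefit ~ {benefit_order(K, n):.4f}")

# K = 2 is a single bit of cooperation; its benefit sqrt(1/n) = 1/sqrt(n).
print(f"1/sqrt(n) = {1 / math.sqrt(n):.4f}")
```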
- Award ID(s): 1908725
- PAR ID: 10296640
- Date Published:
- Journal Name: 2021 IEEE International Symposium on Information Theory (ISIT)
- Page Range / eLocation ID: 2214 to 2219
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
A code C : {0,1}^k → {0,1}^n is a q-locally decodable code (q-LDC) if one can recover any chosen bit b_i of the message b ∈ {0,1}^k with good confidence by randomly querying the encoding x = C(b) on at most q coordinates. Existing constructions of 2-LDCs achieve n = exp(O(k)), and lower bounds show that this is in fact tight. However, when q = 3, far less is known: the best constructions achieve n = exp(k^{o(1)}), while the best known results only show a quadratic lower bound n ≥ Ω(k^2 / log k) on the blocklength. In this paper, we prove a near-cubic lower bound of n ≥ Ω(k^3 / log^6 k) on the blocklength of 3-query LDCs. This improves on the best known prior works by a polynomial factor in k. Our proof relies on a new connection between LDCs and refuting constraint satisfaction problems with limited randomness. Our quantitative improvement builds on new techniques for refuting semirandom instances of CSPs and, in particular, relies on bounding the spectral norm of appropriate Kikuchi matrices.
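For context, the standard example behind the 2-query statement is the Hadamard code, which achieves blocklength 2^k = exp(O(k)) and decodes any bit from two queries. The sketch below is this textbook construction (not a construction from the paper), demonstrated on an uncorrupted codeword:

```python
import itertools, random

# Textbook 2-query LDC (the Hadamard code): b in {0,1}^k is encoded as
# all 2^k inner products <b, x> mod 2, so n = 2^k = exp(O(k)).
k = 8
random.seed(1)
b = [random.randint(0, 1) for _ in range(k)]

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v)) % 2

codeword = {x: inner(b, x) for x in itertools.product([0, 1], repeat=k)}

def decode_bit(i):
    """Recover b_i from two queries: <b,x> + <b, x xor e_i> = b_i (mod 2)."""
    x = tuple(random.randint(0, 1) for _ in range(k))
    x2 = tuple(xj ^ (1 if j == i else 0) for j, xj in enumerate(x))
    return (codeword[x] + codeword[x2]) % 2

# On an uncorrupted codeword the decoder is always right; the LDC property
# is that it stays right with good probability under limited corruption.
assert all(decode_bit(i) == b[i] for i in range(k))
```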
-
Establishing the complexity of Bounded Distance Decoding for Reed-Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime in which it is NP-hard and the regime in which it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum-likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed-Solomon codes of length N and dimension K = O(N), we show that it is NP-hard to decode more than N − K − c·(log N / log log N) errors (with c > 0 an absolute constant). Moreover, we show that the problem is NP-hard under quasipolynomial-time reductions for an error amount > N − K − c·log N (with c > 0 an absolute constant). An alternative natural reformulation of the Bounded Distance Decoding problem for Reed-Solomon codes is as a Polynomial Reconstruction problem. In this view, our results show that it is NP-hard to decide whether there exists a degree-K polynomial passing through K + c·(log N / log log N) points from a given set of points (a_1, b_1), (a_2, b_2), …, (a_N, b_N). Furthermore, it is NP-hard under quasipolynomial-time reductions to decide whether there is a degree-K polynomial passing through K + c·log N many points. These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called Moments Subset Sum, which has been a known open problem, and which may be of independent interest. We further reveal a strong connection with the well-studied Prouhet-Tarry-Escott problem in number theory, which turns out to capture a main barrier in extending our techniques. We believe the Prouhet-Tarry-Escott problem deserves further study in the theoretical computer science community.
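The Polynomial Reconstruction formulation can be made concrete with a brute-force check (illustration only: this subset search is exponential, and the paper's point is precisely that the decision problem is NP-hard; the field size and parameters below are hypothetical):

```python
from itertools import combinations

p = 97  # a small prime field, chosen for illustration

def count_hits(points, K, subset):
    """Lagrange-interpolate the degree-<=K polynomial through `subset`
    (K+1 points with distinct x) and count how many of `points` lie on it."""
    def eval_at(x):
        total = 0
        for j, (xj, yj) in enumerate(subset):
            num = den = 1
            for mth, (xm, _) in enumerate(subset):
                if mth != j:
                    num = num * (x - xm) % p
                    den = den * (xj - xm) % p
            total = (total + yj * num * pow(den, -1, p)) % p
        return total
    return sum(eval_at(x) == y % p for x, y in points)

def reconstructible(points, K, t):
    """Is there a degree-<=K polynomial through at least K + t of `points`?"""
    return any(count_hits(points, K, s) >= K + t
               for s in combinations(points, K + 1))

# y = x^2 + 2 passes through the first five points; the sixth is off-curve.
pts = [(1, 3), (2, 6), (3, 11), (4, 18), (5, 27), (6, 90)]
print(reconstructible(pts, K=2, t=3))  # True: 5 >= K + t points on x^2 + 2
```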
-
We explicitly construct the first nontrivial extractors for degree-d ≥ 2 polynomial sources over 𝔽₂. Our extractor requires min-entropy k ≥ n − √(log n) / (log log n / d)^{d/2}. Previously, no constructions were known, even for min-entropy k ≥ n − 1. A key ingredient in our construction is an input reduction lemma, which allows us to assume that any polynomial source with min-entropy k can be generated by O(k) uniformly random bits. We also provide strong formal evidence that polynomial sources are unusually challenging to extract from, by showing that even our most powerful general-purpose extractors cannot handle polynomial sources with min-entropy below k ≥ n − o(n). In more detail, we show that sumset extractors cannot even disperse from degree-2 polynomial sources with min-entropy k ≥ n − O(n / log log n). In fact, this impossibility result even holds for a more specialized family of sources that we introduce, called polynomial non-oblivious bit-fixing (NOBF) sources. Polynomial NOBF sources are a natural new family of algebraic sources that lie at the intersection of polynomial and variety sources, and thus our impossibility result applies to both of these classical settings. This is especially surprising, since we do have variety extractors that slightly beat this barrier, implying that sumset extractors are not a panacea in the world of seedless extraction.
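As a small illustration of the objects involved (toy parameters, not the paper's construction), the sketch below builds a degree-2 polynomial source over 𝔽₂ and computes its exact min-entropy by enumerating all inputs:

```python
import math, random
from collections import Counter

random.seed(0)
m, n_out = 4, 3  # input bits, output bits (toy sizes, hypothetical)

def rand_quadratic(m):
    """A uniformly random degree-<=2 polynomial over F_2 in m variables."""
    const = random.randint(0, 1)
    lin = [random.randint(0, 1) for _ in range(m)]
    quad = {(i, j): random.randint(0, 1)
            for i in range(m) for j in range(i + 1, m)}
    def f(x):
        v = const + sum(c * xi for c, xi in zip(lin, x))
        v += sum(c * x[i] * x[j] for (i, j), c in quad.items())
        return v % 2
    return f

# A degree-2 polynomial source: each output bit is a quadratic polynomial
# applied to the same uniformly random input bits.
polys = [rand_quadratic(m) for _ in range(n_out)]

# Enumerate all 2^m inputs to get the exact output distribution.
inputs = [[(b >> i) & 1 for i in range(m)] for b in range(2 ** m)]
counts = Counter(tuple(f(x) for f in polys) for x in inputs)
p_max = max(counts.values()) / 2 ** m
print(f"exact min-entropy: {-math.log2(p_max):.2f} bits (out of {n_out})")
```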
-
We study theta-joins in general and join predicates with conjunctions and disjunctions of inequalities in particular, focusing on ranked enumeration, where the answers are returned incrementally in an order dictated by a given ranking function. Our approach achieves strong time and space complexity properties: with n denoting the number of tuples in the database, we guarantee for acyclic full join queries with inequality conditions that, for every value of k, the k top-ranked answers are returned in O(n polylog n + k log k) time. This is within a polylogarithmic factor of O(n + k log k), i.e., the best known complexity for equi-joins, and even of O(n + k), i.e., the time it takes to look at the input and return k answers in any order. Our guarantees extend to join queries with selections and many types of projections (namely those called "free-connex" queries and those that use bag semantics). Remarkably, they hold even when the number of join results is n^ℓ for a join of ℓ relations. The key ingredient is a novel O(n polylog n)-size factorized representation of the query output, which is constructed on the fly for a given query and database. In addition to providing the first nontrivial theoretical guarantees beyond equi-joins, we show in an experimental study that our ranked-enumeration approach is also memory-efficient and fast in practice, beating the running time of state-of-the-art database systems by orders of magnitude.
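The flavor of ranked enumeration can be seen in a heap-based sketch for a single inequality join (a deliberately simplified illustration with hypothetical data; the paper's factorized representation is what avoids the per-tuple candidate lists built here):

```python
import heapq

# Ranked enumeration for the inequality join R.a < S.b, ranked by
# R.w + S.w ascending: answers are emitted one at a time in score order,
# without materializing the full join result.
R = [(1, 0.5), (3, 1.0), (4, 0.2)]   # (a, w)
S = [(2, 0.1), (5, 0.7), (6, 0.3)]   # (b, w)

S_by_weight = sorted(range(len(S)), key=lambda j: S[j][1])

def ranked_join(k):
    heap = []
    for i, (a, wr) in enumerate(R):
        cand = [j for j in S_by_weight if S[j][0] > a]  # joinable, cheap first
        if cand:
            heapq.heappush(heap, (wr + S[cand[0]][1], i, 0, cand))
    out = []
    while heap and len(out) < k:
        score, i, pos, cand = heapq.heappop(heap)
        out.append((R[i], S[cand[pos]], score))
        if pos + 1 < len(cand):  # successor: same R-tuple, next-best S-tuple
            nxt = cand[pos + 1]
            heapq.heappush(heap, (R[i][1] + S[nxt][1], i, pos + 1, cand))
    return out

for r, s, score in ranked_join(k=3):
    print(r, s, round(score, 2))   # answers emitted in score order
```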