Title: Exponentially Simpler Network Rate Regions
Determining the rate region of a network is a problem of central importance in network coding research. Many attempts have been made and significant progress has been achieved on this topic over the last decade. Although this body of work provides multiple ways of calculating outer or inner bounds on the rate region, the sheer complexity of the problem, which involves expressing and projecting a very high dimensional polyhedron, makes it computationally infeasible beyond networks with tens of edges. Aiming to reduce the complexity of rate region calculation, this paper proves a new theorem that implicitly determines the rate region of a network and proposes a corresponding systematic way of applying the theorem to explicitly calculate outer bounds on a rate region. Compared with the traditional method, the proposed method has the potential to compute the true rate region via the projection of simpler polyhedra that have exponentially fewer dimensions and are characterized by exponentially fewer facets.
Award ID(s):
1812965
PAR ID:
10316179
Author(s) / Creator(s):
;
Date Published:
Journal Name:
2021 IEEE International Symposium on Information Theory (ISIT)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
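The complexity described in the abstract above comes largely from the projection step: the rate region is obtained by projecting a polyhedron that lives in a space whose dimension grows exponentially with the number of random variables in the network. The sketch below is a hypothetical illustration, not code from the paper; it shows one standard projection step, Fourier–Motzkin elimination of a single coordinate from a system A x <= b. Each eliminated variable can pair every positive-coefficient row with every negative-coefficient row, which is exactly the facet blow-up the paper aims to sidestep.

```python
# Hypothetical sketch (not from the paper): eliminating one coordinate of a
# polyhedron {x : A x <= b} by Fourier-Motzkin elimination, the basic step
# behind projecting an entropy-space polyhedron onto the rate coordinates.
# Each elimination pairs every "positive" row with every "negative" row,
# which is the source of the blow-up in the number of facets.
import numpy as np

def eliminate_variable(A, b, j):
    """Project {x : A x <= b} onto the coordinates other than x_j."""
    pos = [i for i in range(len(A)) if A[i, j] > 0]
    neg = [i for i in range(len(A)) if A[i, j] < 0]
    zer = [i for i in range(len(A)) if A[i, j] == 0]

    rows, rhs = [], []
    # Rows not involving x_j survive unchanged.
    for i in zer:
        rows.append(np.delete(A[i], j))
        rhs.append(b[i])
    # Every (positive, negative) pair yields one new inequality.
    for p in pos:
        for n in neg:
            coef = A[p, j] * A[n] - A[n, j] * A[p]   # coefficient of x_j cancels
            rows.append(np.delete(coef, j))
            rhs.append(A[p, j] * b[n] - A[n, j] * b[p])
    return np.array(rows), np.array(rhs)

# Example: project the unit square with a diagonal cut onto the first axis.
A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1], [1, 1]])
b = np.array([1.0, 0, 1, 0, 1.5])
A1, b1 = eliminate_variable(A, b, 1)
print(A1, b1)   # inequalities describing the projection onto x_0
```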
More Like this
  1. Motivated by applications in low-delay communication networks, finite-length analysis, or channel dispersion identification, of multi-user channels is very important. Recent studies also incorporate the effects of feedback in point-to-point and common-message broadcast channels (BCs). However, with private messages and feedback, finite-length results for BCs are much scarcer. Though it is known that feedback can strictly enlarge the capacity, the ultimate feedback capacity regions remain unknown even for some classical channels, including Gaussian BCs. In this work, we study the two-user broadcast packet erasure channel (PEC) with causal feedback, one of the cleanest feedback capacity results, whose capacity region can be achieved by elegant linear network coding (LNC). We first derive a new finite-length outer bound for any LNC and then an accompanying inner bound by analyzing a three-phase LNC. For the outer bound, we adopt a linear-space-based framework, which can successfully find the LNC capacity. However, naively applying this method in the finite-length regime results in a loose outer bound. We therefore propose a new bounding technique based on carefully labelling each time slot according to the type of LNC transmitted. Simulation results show that the sum-rate gap between our inner and outer bounds is within 0.02 bits/channel use. Asymptotic analysis also shows that our bounds bracket the channel dispersion of the LNC feedback capacity of the broadcast PEC to within a factor of Q^{-1}(ε/2)/Q^{-1}(ε).
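As a rough illustration of why feedback plus linear network coding helps on the two-user broadcast PEC, the sketch below simulates a simplified two-phase scheme; it is a hypothetical toy, not the paper's three-phase LNC or its bounds. Packets that were erased at their intended user but overheard by the other user are later served to both users at once by broadcasting their XOR.

```python
# Hypothetical illustration (not the paper's scheme): a two-phase feedback +
# XOR coding scheme on a two-user broadcast packet erasure channel.
import random

def simulate(k=5000, e1=0.2, e2=0.2, seed=1):
    rng = random.Random(seed)
    slots = 0
    q1 = q2 = 0               # packets cached only at the unintended user
    missing1 = missing2 = 0   # packets erased at both receivers

    # Phase 1: k packets per user, sent uncoded; feedback reveals who got what.
    for intended in (1, 2):
        for _ in range(k):
            slots += 1
            got1, got2 = rng.random() > e1, rng.random() > e2
            got_own = got1 if intended == 1 else got2
            got_other = got2 if intended == 1 else got1
            if got_own:
                continue
            if got_other:
                if intended == 1:
                    q1 += 1
                else:
                    q2 += 1
            elif intended == 1:
                missing1 += 1
            else:
                missing2 += 1

    # Phase 2: broadcast the XOR of one queued packet per user.  A reception
    # at user i resolves its queued packet, because user i already caches the
    # other summand (it overheard it in phase 1).
    while q1 > 0 or q2 > 0:
        slots += 1
        got1, got2 = rng.random() > e1, rng.random() > e2
        if q1 > 0 and got1:
            q1 -= 1
        if q2 > 0 and got2:
            q2 -= 1

    # Packets erased at both receivers are resent uncoded until received.
    for count, e in ((missing1, e1), (missing2, e2)):
        for _ in range(count):
            while True:
                slots += 1
                if rng.random() > e:
                    break

    return 2 * k / slots      # delivered packets per channel use

print(f"coded sum throughput ~ {simulate():.3f} packets/slot")
```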
  2. A new method is presented, consisting exclusively of simple linear algebra computations, for computing the linear programming Shannon outer bound to the network coding capacity region of a directed hypergraph network. This linear-algebraic formulation enables a new upper bound to be determined on the worst-case complexity of computing the Shannon outer bound to a network coding capacity region.
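To give a sense of the LP whose size motivates such linear-algebraic methods, the sketch below enumerates the elemental Shannon-type inequalities that define the Shannon outer bound over the (2^n - 1)-dimensional entropy space for n random variables. It is a hypothetical illustration of the LP's raw dimensions, not the paper's construction.

```python
# Hypothetical sketch (not the paper's method): enumerating the elemental
# Shannon-type inequalities that define the LP Shannon outer bound.  The
# entropy vector of n random variables lives in a (2^n - 1)-dimensional space
# indexed by nonempty subsets (bitmasks); the elemental inequalities are
#   H(X_i | X_{N minus i}) >= 0                          (n of them)
#   I(X_i ; X_j | X_K) >= 0 for K a subset of N minus {i, j}   (C(n,2) * 2^(n-2))
# Their count is what makes the naive LP blow up with n.
from itertools import combinations

def elemental_inequalities(n):
    """Return rows (dicts mask -> coefficient) meaning  sum coeff * h[mask] >= 0."""
    full = (1 << n) - 1
    rows = []
    # H(X_i | X_{N minus i}) = h(N) - h(N minus i) >= 0
    for i in range(n):
        rows.append({full: 1, full ^ (1 << i): -1})
    # I(X_i; X_j | X_K) = h(K+i) + h(K+j) - h(K+i+j) - h(K) >= 0
    for i, j in combinations(range(n), 2):
        rest = [t for t in range(n) if t not in (i, j)]
        for r in range(len(rest) + 1):
            for ks in combinations(rest, r):
                K = sum(1 << t for t in ks)
                row = {K | (1 << i): 1, K | (1 << j): 1, K | (1 << i) | (1 << j): -1}
                if K:                      # h(empty set) = 0, so drop that term
                    row[K] = -1
                rows.append(row)
    return rows

for n in range(2, 7):
    print(n, "variables:", len(elemental_inequalities(n)), "elemental inequalities,",
          (1 << n) - 1, "dimensions")
```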
  3. Banach's fixed point theorem for contraction maps has been widely used to analyze the convergence of iterative methods in non-convex problems. It is a common experience, however, that iterative maps fail to be globally contracting under the natural metric in their domain, making the applicability of Banach's theorem limited. We explore how generally we can apply Banach's fixed point theorem to establish the convergence of iterative methods when pairing it with carefully designed metrics. Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate. In other words, we show that, whenever an iterative map globally converges to a unique fixed point, there exists a metric under which the iterative map is contracting and which can be used to bound the number of iterations until convergence. We illustrate our approach in the widely used power method, providing a new way of bounding its convergence rate through contraction arguments. We next consider the computational complexity of Banach's fixed point theorem. Making the proof of our converse theorem constructive, we show that computing a fixed point whose existence is guaranteed by Banach's fixed point theorem is CLS-complete. We thus provide the first natural complete problem for the class CLS, which was defined in [Daskalakis, Papadimitriou 2011] to capture the complexity of problems such as P-matrix LCP, computing KKT-points, and finding mixed Nash equilibria in congestion and network coordination games. 
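The power-method example mentioned in the abstract above can be made concrete: for a symmetric matrix with an eigenvalue gap, the angle between the iterate and the leading eigenvector shrinks by roughly |λ2/λ1| per step, which is the kind of contraction factor a Banach-style argument would use to bound the iteration count. The sketch below checks this numerically; it is a hypothetical illustration, not the paper's metric construction.

```python
# Hypothetical illustration (not the paper's construction): the power method
# viewed as a contraction.  For a symmetric matrix, the sine of the angle to
# the top eigenvector shrinks by roughly |lambda_2 / lambda_1| per iteration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = A + A.T                                  # symmetric, real eigenvalues

eigvals, eigvecs = np.linalg.eigh(A)
order = np.argsort(np.abs(eigvals))[::-1]
v_top = eigvecs[:, order[0]]                 # true leading eigenvector
ratio = abs(eigvals[order[1]] / eigvals[order[0]])   # expected contraction factor

x = rng.standard_normal(50)
x /= np.linalg.norm(x)
prev_err = None
for it in range(1, 16):
    x = A @ x
    x /= np.linalg.norm(x)
    err = np.sqrt(max(0.0, 1.0 - float(np.dot(x, v_top)) ** 2))  # sin(angle to v_top)
    if prev_err is not None:
        print(f"iter {it:2d}  sin(angle)={err:.2e}  "
              f"shrink={err / prev_err:.3f}  (|l2/l1|={ratio:.3f})")
    prev_err = err
```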
  4. An expurgating linear function (ELF) is an outer code that disallows low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. A list-decoding sieve can efficiently identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides analytical distance spectrum union (DSU) bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of K = 64 bits or constant codeword blocklength of N = 152 bits, an ELF can reduce the gap at a CER of 10^{-6} between the DSU and the random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. List Viterbi decoding guided by the ELF achieves maximum likelihood (ML) decoding of the concatenated code with a sufficiently large list size. The rate-K/(K+m) ELF outer code reduces the rate, and list decoding increases decoder complexity. As SNR increases, the average list size converges to 1 and the average complexity is similar to Viterbi decoding on the trellis of the inner code. For rare large-magnitude noise events, which occur with frequency lower than the FER of the inner code, a deep search in the list finds the ML codeword.
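The distance spectrum union (DSU) bound mentioned above is, at its core, a union bound weighted by the code's distance spectrum. The sketch below evaluates a bound of the form sum_d A_d Q(sqrt(2 d R Eb/N0)) for BPSK over AWGN with ML decoding; the spectrum used is a made-up placeholder, not the spectrum of any code from the paper, and the K = 64, N = 152 rate is taken from the abstract only as an example operating point.

```python
# Hypothetical sketch: evaluating a distance-spectrum union (DSU) style bound
#   P_cw_error <= sum_d  A_d * Q( sqrt(2 * d * R * Eb/N0) )
# for BPSK over AWGN with ML decoding.  The spectrum {d: A_d} below is a
# made-up placeholder, not the spectrum of any code from the paper.
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def dsu_bound(spectrum, rate, ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return sum(a_d * qfunc(math.sqrt(2 * d * rate * ebn0))
               for d, a_d in spectrum.items())

# Placeholder spectrum: multiplicities A_d of codewords at Hamming distance d.
spectrum = {10: 3, 12: 17, 14: 96, 16: 420}
rate = 64 / 152          # the K = 64, N = 152 operating point mentioned above
for snr in (2.0, 3.0, 4.0, 5.0):
    print(f"Eb/N0 = {snr:.1f} dB  ->  DSU bound {dsu_bound(spectrum, rate, snr):.3e}")
```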