Linear computation broadcast (LCBC) refers to a setting with d-dimensional data stored at a central server, where K users, each with some prior linear side-information, wish to compute various linear combinations of the data. For each computation instance, the data is represented as a d-dimensional vector with elements in a finite field F_{p^n}, where p^n is a power of a prime p. The computation is to be performed many times, and the goal is to determine the minimum amount of information per computation instance that must be broadcast to satisfy all the users. The reciprocal of the optimal broadcast cost per computation instance is the capacity of LCBC. The capacity is known for up to K = 3 users. Since LCBC includes index coding as a special case, large-K settings of LCBC are at least as hard as the index coding problem, so the general LCBC problem (all cases) is beyond reach and is not pursued here. By focusing instead on the generic setting (almost all cases), this work shows that the generic capacity of the symmetric LCBC (where every user has m′ dimensions of side-information and m dimensions of demand) for a large number of users (K ≥ d suffices) is Cg = 1/∆g, where ∆g = min{max{0, d − m′}, dm/(m + m′)} is the broadcast cost that is both achievable and unbeatable, asymptotically almost surely for large n, among all LCBC instances with the given parameters p, K, d, m, m′. Relative to the baseline schemes of random coding or separate transmissions, Cg shows an extremal gain by a factor of K as a function of the number of users, and by a factor of ≈ d/4 as a function of the data dimension, when optimized over the remaining parameters. For an arbitrary number of users, the generic capacity of the symmetric LCBC is characterized within a factor of 2.
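The closed-form expression above lends itself to a quick numerical check. A minimal sketch, directly transcribing the quoted formula (the function names are ours, not the paper's):

```python
from fractions import Fraction

def generic_broadcast_cost(d, m, m_prime):
    """Generic (a.a.s.) broadcast cost Delta_g of the symmetric LCBC,
    per the formula quoted above: min{ max{0, d - m'}, d*m/(m + m') }."""
    term1 = Fraction(max(0, d - m_prime))
    term2 = Fraction(d * m, m + m_prime)
    return min(term1, term2)

def generic_capacity(d, m, m_prime):
    """Generic capacity C_g = 1/Delta_g (None when Delta_g = 0, i.e.,
    when side-information alone already satisfies every user)."""
    delta = generic_broadcast_cost(d, m, m_prime)
    return None if delta == 0 else 1 / delta

# Example: d = 6 data dimensions, m = 2 demanded, m' = 2 known per user:
# Delta_g = min{max{0, 4}, 12/4} = 3, so C_g = 1/3.
```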
The Capacity of 3 User Linear Computation Broadcast
The K User Linear Computation Broadcast (LCBC) problem comprises d-dimensional data (over a finite field Fq) that is fully available to a central server, and K users, who require various linear computations of the data and have prior knowledge of various linear functions of the data as side-information. The optimal broadcast cost is the minimum number of q-ary symbols that must be broadcast by the server per computation instance for every user to retrieve its desired computation. The reciprocal of the optimal broadcast cost is called the capacity. The main contribution of this paper is the exact capacity characterization for the K = 3 user LCBC for all cases, i.e., for arbitrary finite fields Fq, arbitrary data dimension d, and arbitrary linear side-informations and demands at each user. A remarkable aspect of the converse (impossibility result) is that, unlike the 2 user LCBC whose capacity was determined previously, the entropic formulation (where the entropies of demands and side-informations are specified, but not their functional forms) is insufficient to obtain a tight converse for the 3 user LCBC. Instead, the converse exploits functional submodularity. Notable aspects of achievability include the sufficiency of vector linear coding schemes, subspace decompositions that parallel those found previously by Yao Wang in degrees of freedom (DoF) studies of wireless broadcast networks, and efficiency tradeoffs that lead to a constrained waterfilling solution. Random coding arguments are invoked to resolve compatibility issues that arise because each user has a different view of the subspace decomposition, conditioned on its own side-information.
- Award ID(s):
- 2221379
- PAR ID:
- 10547476
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- IEEE Transactions on Information Theory
- Volume:
- 70
- Issue:
- 6
- ISSN:
- 0018-9448
- Page Range / eLocation ID:
- 4414 to 4438
- Subject(s) / Keyword(s):
- Capacity, broadcast, index coding, coded computation
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Network function computation is an active topic in network coding, with much recent progress for linear (over a finite field) computations over broadcast (LCBC) and multiple access (LCMAC) channels. Over a quantum multiple access channel (QMAC) with quantum entanglement shared among transmitters, the linear computation problem (LC-QMAC) is non-trivial even when the channel is noiseless, because of the challenge of optimally exploiting transmit-side entanglement through distributed coding. Given an arbitrary Fd-linear function f(W1, ···, WK) = V1W1 + V2W2 + ··· + VKWK ∈ Fd^{m×1}, the LC-QMAC problem seeks the optimal communication cost (the minimum number of qudits ∆1, ···, ∆K that need to be sent by transmitters Alice1, Alice2, ···, AliceK, respectively, to the receiver, Bob, per computation instance) over a noise-free QMAC, when the input data streams Wk ∈ Fd^{mk×1}, k ∈ [K], originate at the corresponding transmitters, who share quantum entanglement in advance. As our main result, we fully solve this problem for K = 3 transmitters (K ≥ 4 settings remain open). Coding schemes based on the N-sum box protocol (along with time-sharing and batch-processing) are shown to be information-theoretically optimal in all cases. As an example, in the symmetric case where rk(V1) = rk(V2) = rk(V3) ≜ r1, rk([V1, V2]) = rk([V2, V3]) = rk([V3, V1]) ≜ r2, and rk([V1, V2, V3]) ≜ r3 (rk = rank), the minimum total download cost is max{1.5 r1 + 0.75 (r3 − r2), r3}.
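The symmetric-case cost quoted at the end of this abstract is easy to evaluate. A minimal sketch using exact rationals (the function name is ours):

```python
from fractions import Fraction

def lc_qmac_symmetric_cost(r1, r2, r3):
    """Minimum total download cost (qudits per computation instance) for the
    symmetric 3-transmitter LC-QMAC case quoted above:
    max{1.5*r1 + 0.75*(r3 - r2), r3}."""
    return max(Fraction(3, 2) * r1 + Fraction(3, 4) * (r3 - r2), Fraction(r3))

# Example: r1 = 2, r2 = 3, r3 = 4 -> max{3 + 3/4, 4} = 4.
```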
-
The optimal quantum communication cost of computing a classical sum of distributed sources is studied over a quantum erasure multiple access channel (QEMAC). K classical messages composed of finite-field symbols are distributed across S servers, who also share quantum entanglement in advance. Each server s ∈ [S] manipulates its quantum subsystem Qs according to its own available classical messages and sends Qs to the receiver, who then computes the sum of the messages based on a joint quantum measurement. The download cost from server s ∈ [S] is the logarithm of the dimension of Qs. The rate R is defined as the number of instances of the sum computed at the receiver, divided by the total download cost from all the servers. The main focus is on the symmetric setting with K = (S choose α) messages, where each message is replicated among a unique subset of α servers, and the answers from any β servers may be erased. If no entanglement is initially available to the receiver, then we show that the capacity (maximal rate) is precisely C = max{min{2(α − β)/S, (S − 2β)/S}, (α − β)/S}. The capacity with arbitrary levels of prior entanglement (∆0) between the S data-servers and the receiver is also characterized, by including an auxiliary server (Server 0) that has no classical data, so that the communication cost from Server 0 is a proxy for the amount of receiver-side entanglement that is available in advance. The challenge on the converse side resides in the optimal application of the weak monotonicity property, while the achievability combines ideas from classical network coding and treating qudits as classical dits, as well as new constructions based on the N-sum box abstraction that rely on absolutely maximally entangled quantum states.
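The capacity expression did not survive text extraction cleanly; a minimal sketch under our reading of it, C = max{min{2(α − β)/S, (S − 2β)/S}, (α − β)/S}, with the grouping of the fractions being our assumption:

```python
from fractions import Fraction

def qemac_capacity(S, alpha, beta):
    """Capacity of the symmetric QEMAC with no prior receiver-side
    entanglement, under our reading of the extracted formula:
    max{min{2(a-b)/S, (S-2b)/S}, (a-b)/S}."""
    a, b = alpha, beta
    return max(min(Fraction(2 * (a - b), S), Fraction(S - 2 * b, S)),
               Fraction(a - b, S))

# Example: S = 4 servers, replication alpha = 2, erasures beta = 1.
```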
-
This paper studies the distributed linearly separable computation problem, which is a generalization of many existing distributed computing problems such as distributed gradient coding and distributed linear transform. A master asks N distributed workers to compute a linearly separable function of K datasets, namely a set of Kc linear combinations of K equal-length messages (each message is a function of one dataset). We assign some datasets to each worker in an uncoded manner; each worker then computes the corresponding messages and returns some function of these messages, such that from the answers of any Nr out of the N workers the master can recover the task function with high probability. In the literature, only the specific case where Kc = 1 or where the computation cost is minimum has been considered. In this paper, we focus on the general case (i.e., general Kc and general computation cost) and aim to find the minimum communication cost. We first propose a novel converse bound on the communication cost under the constraint of the popular cyclic assignment (widely considered in the literature), which assigns the datasets to the workers in a cyclic way. Motivated by the observation that existing strategies for distributed computing fall short of achieving the converse bound, we propose a novel distributed computing scheme for some system parameters. The proposed computing scheme is optimal for any assignment when Kc is large, and is optimal under the cyclic assignment when the numbers of workers and datasets are equal or Kc is small. In addition, it is order optimal within a factor of 2 under the cyclic assignment in the remaining cases.
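The cyclic assignment mentioned above can be illustrated for the K = N case. A sketch (the function name and the restriction to K = N datasets are ours): worker n stores a contiguous window of N − Nr + 1 datasets, shifted cyclically, so any Nr workers jointly cover all datasets:

```python
def cyclic_assignment(N, Nr):
    """Cyclic dataset-to-worker assignment for K = N datasets: worker n
    stores the N - Nr + 1 datasets {n, n+1, ..., n + N - Nr} mod N.
    Each dataset is replicated N - Nr + 1 times, so erasing the answers
    of any N - Nr workers still leaves every dataset covered."""
    span = N - Nr + 1
    return [[(n + j) % N for j in range(span)] for n in range(N)]

# Example: N = 4 workers, answers from any Nr = 3 suffice ->
# each worker stores 2 datasets: [[0, 1], [1, 2], [2, 3], [3, 0]].
```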
-
This paper considers cache-aided device-to-device (D2D) networks where a trusted server helps to preserve the privacy of the users’ demands. Specifically, the trusted server collects the users’ demands before the delivery phase and sends a query to each user, who then broadcasts multicast packets according to this query. Recently the Authors proposed a D2D private caching scheme that was shown to be order optimal except for the very low memory size regime, where the optimality was proved by comparing to a converse bound without privacy constraint. The main contribution of this paper is a novel converse bound for the studied model where users may collude (i.e., some users share cache contents and demanded files, and yet cannot infer what files the remaining users have demanded) and under the placement phase is uncoded. To the best of the Author’s knowledge, such a general bound is the first that genuinely accounts for the demand privacy constraint. The novel converse bound not only allows to show that the known achievable scheme is order optimal in all cache size regimes (while the existing converse bounds cannot show it), but also has the potential to be used in other variants of demand private caching.more » « less

