Title: Cache-Aided Two-User Broadcast Channels with State Information at Receivers
A two-user coded caching problem is studied in a joint source-channel coding framework. A source generates symbols at a certain rate for each file in the database, and a fixed fraction of the symbols is cached at each user. The delivery phase of coded caching takes place over a time-varying erasure broadcast channel, where the channel state information is only available at the receivers. The maximum source rate to keep up with the ergodic rate of both users is characterized.
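The setup above follows the standard coded caching template: part of every file is cached ahead of time, and the delivery phase sends coded messages that serve both users simultaneously. The following minimal sketch is not the paper's scheme (it uses a noiseless delivery link in place of the erasure broadcast channel, and the file names and sizes are hypothetical); it only shows that template for two users who each cache half of every file and are served by a single XOR transmission.

```python
# Minimal sketch (not the paper's scheme): two users, two files, each user
# caches half of every file; one XOR transmission serves both requests.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical library of two equal-length files.
F = 1024  # file size in bytes (illustrative)
library = {"A": os.urandom(F), "B": os.urandom(F)}

# Placement: user 1 caches the first half of each file, user 2 the second half.
half = F // 2
cache = {
    1: {name: data[:half] for name, data in library.items()},
    2: {name: data[half:] for name, data in library.items()},
}

# Delivery: user 1 requests "A", user 2 requests "B".
d1, d2 = "A", "B"
# User 1 is missing the second half of A; user 2 is missing the first half of B.
missing1 = library[d1][half:]
missing2 = library[d2][:half]
broadcast = xor(missing1, missing2)  # one coded transmission for both users

# Decoding: each user XORs out the part it already holds in its cache.
rec1 = xor(broadcast, cache[1][d2])  # user 1 knows B's first half
rec2 = xor(broadcast, cache[2][d1])  # user 2 knows A's second half
assert cache[1][d1] + rec1 == library[d1]
assert rec2 + cache[2][d2] == library[d2]
```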
Award ID(s):
1749981
PAR ID:
10164735
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2019 IEEE International Symposium on Information Theory (ISIT)
Page Range / eLocation ID:
16 to 20
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The most commonly used setting in the coded caching literature consists of the following four elements: (i) homogeneous file sizes, (ii) homogeneous cache sizes, (iii) user-independent homogeneous file popularity (i.e., all users share the same file preference), and (iv) worst-case rate analysis. While recent results have relaxed some of these assumptions, a deeper understanding of the fully heterogeneous setting is still much needed, since traditional caching schemes place few assumptions on file/cache sizes and almost always allow each user to have his/her own file preference through individualized file request prediction. Taking a microscopic approach, this paper characterizes the exact capacity of the smallest 2-user/2-file (N = K = 2) problem, but under the most general setting that simultaneously allows for (i) heterogeneous file sizes, (ii) heterogeneous cache sizes, (iii) user-dependent file popularity, and (iv) average-rate analysis. Completely solving the case of N = K = 2 could shed further insight on the performance and complexity of optimal coded caching with full heterogeneity for arbitrary N and K. 
  2. In this paper, we propose a new design framework for Device-to-Device (D2D) coded caching networks that achieves the optimal communication load (rate) with significantly fewer file subpacketizations than the well-known D2D coded caching scheme proposed by Ji, Caire and Molisch (JCM). The proposed design framework is referred to as the Packet Type-based (PTB) design, where each file is partitioned into packets according to pre-defined types, while the cache placement and user multicast grouping are based on the packet types. This leads to the so-called raw packet saving gain for the subpacketization levels. By a careful selection of transmitters within each multicasting group, a so-called further splitting ratio gain of the subpacketization can also be achieved. By the joint effect of the raw packet saving gain and the further splitting ratio gain, an order-wise subpacketization reduction can be achieved compared to the JCM scheme while preserving the optimal rate. In addition, for the first time in the literature to our knowledge, we find that unequal subpacketization is key to achieving subpacketization reductions when the number of users is odd. As a by-product, we conclude that, at least for the sake of subpacketization, a new design framework is indeed needed rather than directly translating shared-link caching schemes into D2D caching schemes. (A brief sketch of the subpacketization growth that motivates this line of work appears after this list.) 
  3. Coded caching, proposed by Maddah-Ali and Niesen (MAN), has the potential to reduce network traffic by pre-storing content in the users' local memories when the network is underutilized and transmitting coded multicast messages that benefit many users at once during peak-hour times. This paper considers the linear function retrieval version of the original coded caching setting, where users are interested in retrieving a number of linear combinations of the data points stored at the server, as opposed to a single file. This extends the scope of the authors' past work, which only considered the class of linear functions that operate element-wise over the files. On observing that the existing cache-aided scalar linear function retrieval scheme does not work in the proposed setting, this paper designs a novel coded caching scheme that outperforms uncoded caching schemes that either use unicast transmissions or let each user recover all files in the library. (An illustrative toy example of the scalar, element-wise case appears after this list.) 
  4. We present a novel Packet Type (PT)-based design framework for the finite-length analysis of Device-to-Device (D2D) coded caching. By exploiting the asymmetry in the coded delivery phase, two fundamental forms of subpacketization reduction gain for D2D coded caching, i.e., the subfile saving gain and the further splitting saving gain, are identified in the PT framework. The proposed framework features a streamlined design process which uses several key concepts, including user grouping, subfile and packet types, multicast group types, transmitter selection, local/global further splitting factors, and PT design as an integer optimization. In particular, based on a predefined user grouping, the subfile and multicast group types can be determined, and the cache placement of the users then follows. At this stage, subfiles of certain types can potentially be excluded from the designed caching scheme altogether, which we refer to as the subfile saving gain. In the delivery phase, by carefully selecting the transmitters within each type of multicast group, each subfile needs to be further split into fewer packets, leading to the further splitting saving gain. The joint effect of these two gains results in an overall subpacketization reduction compared to the Ji-Caire-Molisch (JCM) scheme [1]. Using the PT framework, a new class of D2D caching schemes is constructed with an order-wise reduction in subpacketization at the same rate as the JCM scheme. 
  5. Coded caching is a technique for reducing congestion in communication networks by prefetching content during idle periods and exploiting multicasting opportunities during periods of heavy traffic. Most of the existing research in this area has focused on minimizing the worst-case (i.e., peak) rate in a broadcast link with multiple identically distributed user requests. However, modern content delivery networks are investing heavily in profiling their users and predicting their preferences. The minimal achievable rate of a coded caching scheme with heterogeneous user profiles is still unknown in general. This paper presents the first steps towards solving that problem by analyzing the case of two users with distinct but overlapping demand sets. Specifically, it provides a complete characterization of the uniform-average-rate capacity when the sets overlap in just one file and shows that this capacity can be achieved with selfish and uncoded prefetching. It then characterizes the same capacity under selfish and uncoded prefetching when the demand sets overlap in two or more files. The paper also provides explicit prefetching schemes that achieve those capacities. All our results allow for arbitrary (and not necessarily identical) user cache sizes and numbers of files in each demand set. (A toy computation of a uniform-average rate under selfish, uncoded prefetching appears after this list.) 
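The subpacketization issue targeted by items 2 and 4 above stems from combinatorial file splitting. The snippet below is only a back-of-the-envelope illustration of that growth, using the C(K, t) splitting of the classic shared-link Maddah-Ali-Niesen placement with t = KM/N; it does not reproduce the PTB/PT constructions or the JCM scheme, and the parameter choices are hypothetical.

```python
# Back-of-the-envelope look at why subpacketization matters.  The classic
# shared-link Maddah-Ali--Niesen placement splits every file into C(K, t)
# subfiles, with t = K*M/N; D2D schemes generally need further splitting on
# top of that.  Numbers below are illustrative, not taken from the cited papers.
from math import comb

def man_subpacketization(K: int, M: int, N: int) -> int:
    t = K * M // N          # assumes K*M/N is an integer, for simplicity
    return comb(K, t)

for K in (4, 8, 16, 32):
    N = K                   # hypothetical: as many files as users
    M = N // 2              # each user caches half of the library
    print(K, man_subpacketization(K, M, N))
# Output grows as C(K, K/2): 6, 70, 12870, 601080390 -- exponential in K,
# which is exactly the bottleneck that reduced-subpacketization designs target.
```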
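Item 3 builds on cache-aided scalar (element-wise) linear function retrieval. As an illustration of that scalar idea only, and not of the more general scheme proposed there, the toy example below lets two users request XOR combinations of two files and still be served by a single coded transmission; file names and sizes are hypothetical.

```python
# Minimal sketch of cache-aided *scalar linear function* retrieval for two
# users and two files (element-wise XOR combinations of the files).
import os

def xor(*parts: bytes) -> bytes:
    out = bytes(len(parts[0]))          # all-zero string of the right length
    for p in parts:
        out = bytes(x ^ y for x, y in zip(out, p))
    return out

F, half = 1024, 512
A, B = os.urandom(F), os.urandom(F)

# Placement: user 1 caches the first halves, user 2 the second halves.
cache1 = {"A": A[:half], "B": B[:half]}
cache2 = {"A": A[half:], "B": B[half:]}

# Demands are linear combinations of the files (element-wise over GF(2)):
# user 1 wants A xor B, user 2 wants A.
want1 = xor(A, B)
want2 = A

# Server sends one coded symbol: the half of (A xor B) that user 1 is missing,
# XORed with the half of A that user 2 is missing.
broadcast = xor(want1[half:], want2[:half])   # = (A2 ^ B2) ^ A1

# User 1 cancels A1 (in its cache) to recover the missing half of A xor B.
rec1 = xor(broadcast, cache1["A"])
assert xor(cache1["A"], cache1["B"]) + rec1 == want1
# User 2 computes A2 ^ B2 from its cache and cancels it to recover A1.
rec2 = xor(broadcast, cache2["A"], cache2["B"])
assert rec2 + cache2["A"] == want2
```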
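For item 5, the sketch below only unpacks the terminology on a toy instance: two users with demand sets overlapping in one file, selfish and uncoded prefetching, and a uniform-average rate obtained by averaging the delivery load over uniformly random demand pairs. The delivery here is plain uncoded unicast, so the resulting rate is merely a baseline and not the capacity characterized in that paper; the cache sizes and file names are hypothetical.

```python
# Toy instance: demand sets D1 = {A, C}, D2 = {B, C} overlap in one file (C).
# Prefetching is selfish (each user caches only files from its own demand set)
# and uncoded; delivery is plain uncoded unicast, i.e. a baseline only.
from itertools import product

D1, D2 = ("A", "C"), ("B", "C")
M1, M2 = 1.0, 0.5          # hypothetical cache sizes, in units of one file

# Selfish, uncoded placement: spread each cache evenly over the user's own
# demand set, so user k stores a fraction M_k/|D_k| of every file it may request.
frac1 = {f: M1 / len(D1) for f in D1}
frac2 = {f: M2 / len(D2) for f in D2}

def uncoded_load(demand1: str, demand2: str) -> float:
    # Server unicasts whatever part of the requested file each user lacks.
    return (1 - frac1[demand1]) + (1 - frac2[demand2])

# Uniform-average rate: average the load over all demand pairs, with each user
# picking uniformly at random from its own demand set.
pairs = list(product(D1, D2))
avg_rate = sum(uncoded_load(d1, d2) for d1, d2 in pairs) / len(pairs)
print(avg_rate)   # = (1 - M1/2) + (1 - M2/2) = 1.25 for the numbers above
```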