Title: Finite-length Analysis of D2D Coded Caching via Exploiting Asymmetry in Delivery
We present a novel Packet Type (PT)-based design framework for the finite-length analysis of Device-to-Device (D2D) coded caching. By the exploitation of the asymmetry in the coded delivery phase, two fundamental forms of subpacketization reduction gain for D2D coded caching, i.e., the subfile saving gain and the further splitting saving gain, are identified in the PT framework. The proposed framework features a streamlined design process which uses several key concepts including user grouping, subfile and packet types, multicast group types, transmitter selection, local/global further splitting factor, and PT design as an integer optimization. In particular, based on a predefined user grouping, the subfile and multicast group types can be determined and the cache placement of the users can be correspondingly determined. In this stage, subfiles of certain types can be potentially excluded without being used in the designed caching scheme, which we refer to as subfile saving gain. In the delivery phase, by a careful selection of the transmitters within each type of multicast groups, a smaller number of packets that each subfile needs to be further split into can be achieved, leading to the further splitting saving gain. The joint effect of these two gains results in an overall subpacketization reduction compared to the Ji-Caire-Molisch (JCM) scheme [1]. Using the PT framework, a new class of D2D caching schemes is constructed with order reduction on subpacketization but the same rate when compared to the JCM scheme.  more » « less
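For context, the following is a minimal sketch (not from the paper) of the JCM baseline that the abstract compares against, using the expressions commonly attributed to the JCM scheme: with K users, a library of N files, and a per-user cache of M files, t = KM/N, the D2D delivery rate is (K - t)/t, and the subpacketization is t·C(K, t). The function name, the notation, and the assumption that t is an integer are illustrative choices here and should be checked against [1].

```python
from math import comb

def jcm_baseline(K: int, N: int, M: int):
    """Rate and subpacketization of the JCM D2D scheme, as commonly stated
    (illustrative sketch; exact expressions should be checked against [1])."""
    assert (K * M) % N == 0, "this sketch only handles integer t = KM/N"
    t = K * M // N                       # cache replication factor
    rate = (K - t) / t                   # D2D delivery load per file
    subpacketization = t * comb(K, t)    # each of the C(K, t) subfiles split into t packets
    return rate, subpacketization

# Example: K = 6 users, N = 6 files, M = 3 files of cache per user -> t = 3
print(jcm_baseline(6, 6, 3))             # (1.0, 60)
```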
Award ID(s):
1824558 1817154
PAR ID:
10388595
Author(s) / Creator(s):
;
Date Published:
Journal Name:
2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we propose a new design framework for Device-to-Device (D2D) coded caching networks with optimal communication load (rate) but significantly smaller file subpacketization compared to that of the well-known D2D coded caching scheme proposed by Ji, Caire and Molisch (JCM). The proposed design framework is referred to as the Packet Type-based (PTB) design, where each file is partitioned into packets according to pre-defined types, while the cache placement and user multicast grouping are based on the packet types. This leads to the so-called raw packet saving gain for the subpacketization levels. By a careful selection of transmitters within each multicasting group, a so-called further splitting ratio gain of the subpacketizations can also be achieved. By the joint effect of the raw packet saving gain and the further splitting ratio gain, an order-wise subpacketization reduction can be achieved compared to the JCM scheme while preserving the optimal rate. In addition, to the best of our knowledge for the first time in the literature, we find that unequal subpacketization is key to achieving subpacketization reductions when the number of users is odd. As a by-product, we show that, at least for the sake of subpacketization, a new design framework is indeed needed instead of directly translating shared link caching schemes to D2D caching schemes.
  2. In order to preserve the privacy of the users' demands from other users, in this paper we formulate a novel information-theoretic Device-to-Device (D2D) private caching model by adding a trusted server. In the delivery phase, the trusted server collects the users' demands and sends a query to each user, who then broadcasts packets according to this query. Two D2D private caching schemes (uncoded and coded) are proposed in this paper and shown to be order optimal.
  3. This paper considers cache-aided device-to-device (D2D) networks where a trusted server helps to preserve the privacy of the users' demands. Specifically, the trusted server collects the users' demands before the delivery phase and sends a query to each user, who then broadcasts multicast packets according to this query. Recently, the authors proposed a D2D private caching scheme that was shown to be order optimal except in the very low memory size regime, where the optimality was proved by comparison to a converse bound without the privacy constraint. The main contribution of this paper is a novel converse bound for the studied model where users may collude (i.e., some users share cache contents and demanded files, yet cannot infer which files the remaining users have demanded) and where the placement phase is uncoded. To the best of the authors' knowledge, this is the first general bound that genuinely accounts for the demand privacy constraint. The novel converse bound not only shows that the known achievable scheme is order optimal in all cache size regimes (which the existing converse bounds cannot show), but also has the potential to be used in other variants of demand-private caching.
  4. Coded multicasting has been shown to be a promising approach to significantly improve the performance of content delivery networks with multiple caches downstream of a common multicast link. However, the schemes that have been shown to achieve order-optimal performance require content items to be partitioned into a number of packets that grows exponentially with the number of caches, leading to codes of exponential complexity that jeopardize their promising performance benefits. In this paper, we address this crucial performance-complexity tradeoff in a heterogeneous caching network setting, where edge caches with possibly different storage capacities collect multiple content requests that may follow distinct demand distributions. We extend the asymptotic (in the number of packets per file) analysis of shared link caching networks to heterogeneous network settings, and present novel coded multicast schemes, based on local graph coloring, that exhibit polynomial-time complexity in all the system parameters while preserving the asymptotically proven multiplicative caching gain even for finite file packetization. We further demonstrate that the packetization order (the number of packets each file is split into) can be traded off against the number of requests collected by each cache, while preserving the same multiplicative caching gain. Simulation results confirm the superiority of the proposed schemes and illustrate the interesting request aggregation vs. packetization order tradeoff in several practical settings. Our results provide a compelling step towards the practical achievability of the promising multiplicative caching gain in next generation access networks. A minimal sketch of the underlying coloring idea follows below.
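To make the coloring-based coded multicast idea concrete, here is a minimal sketch under stated assumptions, not the authors' exact algorithm: each user's requested packet becomes a vertex of a conflict graph, two requests conflict unless they are identical or each user already caches the packet wanted by the other, and a polynomial-time greedy coloring yields one coded (XOR) transmission per color. The graph construction, dictionary layout, and function name are illustrative.

```python
def greedy_coded_multicast(requests, caches):
    """Toy conflict-graph coloring for coded multicast.

    requests: dict user -> id of the packet that user requests
    caches:   dict user -> set of packet ids in that user's cache
    Two requests conflict (cannot share one XOR transmission) unless the
    packets are identical or each user already caches the other's packet.
    A greedy coloring of the conflict graph gives one coded transmission
    per color, in time polynomial in the number of users.
    """
    def conflict(u, v):
        pu, pv = requests[u], requests[v]
        if pu == pv:
            return False                       # same packet: plain multicast works
        return pu not in caches[v] or pv not in caches[u]

    color = {}
    for u in requests:                          # greedy coloring
        used = {color[v] for v in color if conflict(u, v)}
        color[u] = next(c for c in range(len(requests)) if c not in used)

    groups = {}                                 # color -> packets XORed together
    for u, c in color.items():
        groups.setdefault(c, set()).add(requests[u])
    return list(groups.values())

# 3 users, each caching the packets the other two want: one XOR serves everyone.
reqs   = {1: 'A', 2: 'B', 3: 'C'}
caches = {1: {'B', 'C'}, 2: {'A', 'C'}, 3: {'A', 'B'}}
print(greedy_coded_multicast(reqs, caches))     # one coded group: {'A', 'B', 'C'}
```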
  5. Coded Caching, proposed by Maddah-Ali and Niesen (MAN), has the potential to reduce network traffic by pre-storing content in the users' local memories when the network is underutilized, and by transmitting coded multicast messages that simultaneously benefit many users during peak-hour times. This paper considers the linear function retrieval version of the original coded caching setting, where users are interested in retrieving a number of linear combinations of the data points stored at the server, as opposed to a single file. This extends the scope of the authors' past work, which only considered the class of linear functions that operate element-wise over the files. Observing that the existing cache-aided scalar linear function retrieval scheme does not work in the proposed setting, this paper designs a novel coded caching scheme that outperforms uncoded caching schemes that either use unicast transmissions or let each user recover all files in the library.
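As a rough illustration of the demand model described above (an assumption-level sketch, not the authors' scheme), over GF(2) a linear function retrieval demand is a coefficient vector over the library, and the requested content is the corresponding XOR of files; a single-file demand is the special case of a weight-one coefficient vector. All names and dimensions below are made up for the example.

```python
import numpy as np

# Toy model of linear function retrieval over GF(2): the library is N files of
# F bits each, and a user demands a linear combination (XOR) of the files given
# by a coefficient vector, rather than a single file.
rng = np.random.default_rng(0)
N, F = 4, 8
library = rng.integers(0, 2, size=(N, F), dtype=np.uint8)   # N files, F bits each

def demanded_function(coeffs, files):
    """Return sum_i coeffs[i] * files[i] over GF(2), i.e. the XOR of selected files."""
    out = np.zeros(files.shape[1], dtype=np.uint8)
    for a, f in zip(coeffs, files):
        if a:
            out ^= f
    return out

# A user demanding W_0 + W_2 + W_3 over GF(2); [0, 1, 0, 0] would be a classic
# single-file demand for W_1.
a_k = np.array([1, 0, 1, 1], dtype=np.uint8)
print(demanded_function(a_k, library))
```

Broadcasting the entire library trivially lets every user compute any such combination locally; that corresponds to the "let each user recover all files" uncoded baseline the abstract compares against.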