

Title: Joint Caching and Routing in Congestible Networks of Arbitrary Topology
In-network caching constitutes a promising approach to reducing traffic loads and alleviating congestion in both wired and wireless networks. In this paper, we study the joint caching and routing problem in congestible networks of arbitrary topology (JoCRAT) as a generalization of previous efforts in this field. We show that JoCRAT generalizes many previous problems in the caching literature that are intractable even with specific topologies and/or unlimited communication bandwidth. To handle this significant but challenging problem, we develop a novel approximation algorithm with a guaranteed performance bound based on a randomized rounding technique. Evaluation results demonstrate that our proposed algorithm achieves near-optimal performance over a broad array of synthetic and real networks, while significantly outperforming state-of-the-art methods.
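The abstract names randomized rounding as the core technique but gives no algorithmic detail. As a rough illustration of the general idea (not the paper's algorithm), the sketch below assumes a fractional cache-placement vector is already available, e.g. from an LP relaxation, and rounds it: each item is cached with probability equal to its fractional value, and the best feasible outcome over many trials is kept. All names, the utility model, and the single-capacity constraint are illustrative assumptions.

```python
import random

def randomized_round(frac_placement, capacity, trials=1000, seed=0):
    """Round a fractional cache-placement vector (e.g. from an LP
    relaxation) to an integral placement.  Each item is cached with
    probability equal to its fractional value; among feasible rounded
    solutions we keep the one with the highest total utility.

    frac_placement maps item -> (fractional value in [0, 1], utility).
    Illustrative sketch only, not the paper's exact algorithm.
    """
    rng = random.Random(seed)
    best, best_util = set(), 0.0
    for _ in range(trials):
        cached = {i for i, (x, _) in frac_placement.items()
                  if rng.random() < x}
        if len(cached) <= capacity:  # respect the cache-size constraint
            util = sum(frac_placement[i][1] for i in cached)
            if util > best_util:
                best, best_util = cached, util
    return best, best_util
```

Repeating the rounding and keeping the best feasible draw is a standard way to turn a per-trial probabilistic guarantee into a high-probability one.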
Award ID(s):
1815676
NSF-PAR ID:
10121140
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Internet of Things Journal
ISSN:
2327-4662
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Internet of drones (IoD), employing drones as the internet of things (IoT) devices, brings flexibility to IoT networks and has been used to provision several applications (e.g., object tracking and traffic surveillance). The explosive growth of users and IoD applications injects massive traffic into IoD networks, hence causing congestion and reducing the quality of service (QoS). In order to improve the QoS, caching at IoD gateways is a promising solution that stores popular IoD data and sends them directly to the users instead of activating drones to transmit the data; this reduces the traffic in IoD networks. In order to fully utilize the storage-limited caches, appropriate content placement decisions should be made to determine which data should be cached. On the other hand, appropriate drone association strategies, which determine the serving IoD gateway for each drone, help distribute the network traffic properly and hence improve the QoS. In our work, we consider a joint optimization of the drone association and content placement problem aimed at maximizing the average data transfer rate. This problem is formulated as an integer linear programming (ILP) problem. We then design the Drone Association and Content Placement (DACP) algorithm to solve this problem with low computational complexity. Extensive simulations demonstrate the performance of DACP.
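To make the joint decision concrete, here is a brute-force solver for a toy instance of the drone-association / content-placement problem described above. The rate model (cached items served at the full gateway rate, uncached items at half because a drone must be activated), the function name, and all instance data are illustrative assumptions; the paper's DACP algorithm solves the ILP at much lower complexity than this exhaustive search.

```python
from itertools import product, combinations

def best_association_and_placement(drones, gateways, contents,
                                   rate, cache_size, popularity):
    """Exhaustively solve a toy joint drone-association /
    content-placement instance: pick a serving gateway per drone and a
    cache (at most cache_size items) per gateway so the
    popularity-weighted data rate is maximized."""
    best_val, best_sol = -1.0, None
    # enumerate every drone -> gateway association
    for assoc in product(gateways, repeat=len(drones)):
        # enumerate every per-gateway cache of size <= cache_size
        cache_options = [frozenset(c)
                         for k in range(cache_size + 1)
                         for c in combinations(contents, k)]
        for caches in product(cache_options, repeat=len(gateways)):
            cache_of = dict(zip(gateways, caches))
            val = 0.0
            for d, g in zip(drones, assoc):
                for item in contents:
                    # assumed rate model: cached -> full rate,
                    # uncached -> half rate (drone activation)
                    r = rate[(d, g)] * (1.0 if item in cache_of[g] else 0.5)
                    val += popularity[item] * r
            if val > best_val:
                best_val = val
                best_sol = (dict(zip(drones, assoc)), cache_of)
    return best_val, best_sol
```

Exhaustive search is exponential in the number of drones and gateways, which is exactly why a low-complexity heuristic like DACP is needed at scale.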
  2. null (Ed.)
    Crucial performance metrics of a caching algorithm include its ability to quickly and accurately learn a popularity distribution of requests. However, a majority of work on analytical performance analysis focuses on hit probability after an asymptotically large time has elapsed. We consider an online learning viewpoint, and characterize the "regret" in terms of the finite time difference between the hits achieved by a candidate caching algorithm with respect to a genie-aided scheme that places the most popular items in the cache. We first consider the Full Observation regime wherein all requests are seen by the cache. We show that the Least Frequently Used (LFU) algorithm is able to achieve order-optimal regret, which is matched by an efficient counting algorithm design that we call LFU-Lite. We then consider the Partial Observation regime wherein only requests for items currently cached are seen by the cache, making it similar to an online learning problem related to the multi-armed bandit problem. We show how approaching this "caching bandit" using traditional approaches yields either high complexity or regret, but a simple algorithm design that exploits the structure of the distribution can ensure order-optimal regret. We conclude by illustrating our insights using numerical simulations.
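For readers unfamiliar with the baseline, here is a minimal sketch of the Full Observation setting: a classic LFU cache that counts every request (counts are exact because all requests are seen) and keeps the most frequently requested items cached. This is plain LFU, not the paper's LFU-Lite variant, whose memory-efficient counting details are not given in the abstract.

```python
from collections import Counter

class LFUCache:
    """Least Frequently Used cache under full observation: every
    request updates an exact frequency count, and the `capacity`
    most-frequent items are considered cached."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = Counter()

    def request(self, item):
        """Return True on a cache hit, then record the request."""
        cached = {i for i, _ in self.counts.most_common(self.capacity)}
        hit = item in cached
        self.counts[item] += 1
        return hit
```

The genie-aided scheme the regret is measured against would simply pre-load the truly most popular items; LFU's counts converge to that ranking as requests accumulate.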
  3. Many applications deployed to public clouds are concerned about the confidentiality of their outsourced data, such as financial services and electronic patient records. A plausible solution to this problem is homomorphic encryption (HE), which supports certain algebraic operations directly over the ciphertexts. The downside of HE schemes is their significant, if not prohibitive, performance overhead for data-intensive workloads that are very common for outsourced databases, or database-as-a-service in cloud computing. The objective of this work is to mitigate the performance overhead incurred by the HE module in outsourced databases. To that end, this paper proposes a radix-based parallel caching optimization for accelerating the performance of homomorphic encryption (HE) of outsourced databases in cloud computing. The key insight of the proposed optimization is caching selected radix-ciphertexts in parallel without violating existing security guarantees of the primitive/base HE scheme. We design the radix HE algorithm and apply it to both batch- and incremental-HE schemes; we demonstrate the security of those radix-based HE schemes by showing that the problem of breaking them can be reduced to the problem of breaking their base HE schemes, which are known to be IND-CPA-secure (i.e., indistinguishable under chosen-plaintext attacks). We implement the radix-based schemes as middleware of a 10-node Cassandra cluster on CloudLab; experiments on six workloads show that the proposed caching can boost state-of-the-art HE schemes, such as Paillier and Symmetria, by up to five orders of magnitude.
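The radix idea can be illustrated with a toy additively homomorphic scheme. The sketch below uses a deliberately tiny, insecure Paillier instance (demo primes, with the standard g = n + 1 simplification): every (digit, position) component is encrypted once and cached, and Enc(m) is then assembled by multiplying cached ciphertexts, since Paillier ciphertext multiplication adds plaintexts. Note that naively reusing cached ciphertexts leaks digit equality; the paper argues security for its actual construction, which this toy does not reproduce.

```python
import math
import random

def paillier_keys(p=61, q=53):
    """Tiny, INSECURE demo Paillier keys (g = n + 1 variant)."""
    n = p * q
    lam = (p - 1) * (q - 1)
    mu = pow(lam, -1, n)          # valid because g = n + 1
    return n, lam, mu

def enc(n, m, rng):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be a unit mod n
        r = rng.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def dec(n, lam, mu, c):
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

def radix_cache(n, base, length, rng):
    """Pre-encrypt every (digit, position) radix component once."""
    return {(d, j): enc(n, d * base ** j, rng)
            for d in range(base) for j in range(length)}

def cached_encrypt(n, cache, base, length, m):
    """Assemble Enc(m) from cached components: multiplying Paillier
    ciphertexts adds the underlying plaintext digits."""
    c = 1
    for j in range(length):
        d = (m // base ** j) % base
        c = (c * cache[(d, j)]) % (n * n)
    return c
```

The cache holds only base × length ciphertexts (30 entries here for 3 decimal digits) yet can produce a ciphertext for any of the base^length plaintexts with cheap modular multiplications instead of fresh exponentiations.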
  4. Cache-aided wireless device-to-device (D2D) networks allow a significant throughput increase, depending on the concentration of the popularity distribution of files. Many studies assume that all users have the same preference distribution; however, this may not be true in practice. This work investigates whether and how the information about individual preferences can benefit cache-aided D2D networks. We examine a clustered network and derive a network utility that incorporates both the user distribution and channel fading effects into the analysis. We also formulate a utility maximization problem for designing caching policies. This maximization problem can be applied to optimize several important quantities, including throughput, energy efficiency (EE), cost, and hit-rate, and to solve different tradeoff problems. We provide a general approach that can solve the proposed problem under the assumption that users coordinate, then prove that the proposed approach can obtain the stationary point under a mild assumption. Using simulations of practical setups, we show that performance can improve significantly with proper exploitation of individual preferences. We also show that different types of tradeoffs exist between different performance metrics and that they can be managed through caching policy and cooperation distance designs.
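A stripped-down sketch of why individual preferences matter: with per-user preference distributions, a hit-rate-maximizing cache should rank files by aggregate preference across users rather than by any single assumed common distribution. The sketch below ignores the paper's clustering and channel-fading model entirely and optimizes only the average hit probability; all names and data are illustrative.

```python
def hit_rate(cache, prefs):
    """Average hit probability over users, where prefs[u] is user u's
    own preference distribution {file: probability}."""
    return sum(sum(p for f, p in dist.items() if f in cache)
               for dist in prefs.values()) / len(prefs)

def greedy_cache(prefs, capacity):
    """Cache the files with the largest preference aggregated over all
    users -- the hit-rate-optimal choice for this simplified model."""
    agg = {}
    for dist in prefs.values():
        for f, p in dist.items():
            agg[f] = agg.get(f, 0.0) + p
    return set(sorted(agg, key=agg.get, reverse=True)[:capacity])
```

When users disagree (e.g. one user strongly prefers a file the others rarely request), the aggregate ranking can differ sharply from any single user's ranking, which is the gap the paper's utility-maximization framework exploits.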
  5. Content delivery networks (CDNs) cache and serve a majority of the user-requested content on the Internet. Designing caching algorithms that automatically adapt to the heterogeneity, burstiness, and non-stationary nature of real-world content requests is a major challenge and is the focus of our work. While there is much work on caching algorithms for stationary request traffic, the work on non-stationary request traffic is very limited. Consequently, most prior models are inaccurate for non-stationary production CDN traffic. We propose two TTL-based caching algorithms that provide provable performance guarantees for request traffic that is bursty and non-stationary. The first algorithm, called d-TTL, dynamically adapts a TTL parameter using stochastic approximation. Given a feasible target hit rate, we show that d-TTL converges to its target value for a general class of bursty traffic that allows Markov dependence over time and non-stationary arrivals. The second algorithm, called f-TTL, uses two caches, each with its own TTL. The first-level cache adaptively filters out non-stationary traffic, while the second-level cache stores frequently accessed stationary traffic. Given feasible targets for both the hit rate and the expected cache size, f-TTL asymptotically achieves both targets. We evaluate both d-TTL and f-TTL using an extensive trace containing more than 500 million requests from a production CDN server. We show that both d-TTL and f-TTL converge to their hit rate targets with an error of about 1.3%. However, f-TTL requires a significantly smaller cache size than d-TTL to achieve the same hit rate, since it effectively filters out non-stationary content.
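The core d-TTL idea, adapting a TTL toward a target hit rate via stochastic approximation, can be sketched in a few lines: after each request, nudge the TTL up on a miss and down on a hit, in proportion to the gap from the target. The fixed step size, the rate model, and the function name are simplifying assumptions; the paper's analysis uses properly decaying step sizes to prove convergence.

```python
def d_ttl(requests, target, step=0.1):
    """Simulate a d-TTL-style cache on timestamped requests.

    requests: iterable of (time, item) pairs, time non-decreasing.
    target:   desired hit rate in [0, 1].
    After each request the TTL takes a stochastic-approximation step
    toward the target: misses push it up, hits pull it down.
    Returns the final TTL and the achieved hit rate.
    """
    ttl, expiry, hits, total = 1.0, {}, 0, 0
    for t, item in requests:
        hit = expiry.get(item, -1.0) >= t   # still cached at time t?
        hits += hit
        total += 1
        ttl = max(0.0, ttl + step * (target - hit))
        expiry[item] = t + ttl              # (re)start the item's timer
    return ttl, hits / total
```

On a steady stream of requests for one item at unit intervals, the TTL hovers around the request spacing and the hit rate settles at the target, which is the qualitative behavior the paper proves for much richer traffic classes.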