The evolution of cellular networks into dynamic, dense, and heterogeneous networks has introduced new challenges for cell resource optimization, especially in regions with imbalanced traffic load. Numerous load balancing schemes have been proposed to tackle this issue; however, they operate in a reactive manner that limits their ability to meet stringent quality of experience demands. To address this challenge, we propose a novel proactive load balancing scheme. Our framework jointly learns users' mobility and demand statistics to proactively cache future content while users dwell in lightly loaded cells, thereby maximizing quality of experience and minimizing load. System-level simulations are performed and compared with state-of-the-art reactive schemes.
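As an illustration of the kind of decision such a framework makes, the following is a minimal sketch of a proactive prefetching rule driven by predicted mobility and content demand; the function name, thresholds, and scoring rule are hypothetical and not taken from the paper.

```python
# Minimal sketch of a proactive-caching decision rule of this flavor (illustrative
# only; names, thresholds, and the scoring rule are assumptions, not the paper's
# actual formulation).

def prefetch_plan(next_cell_probs, content_probs, current_cell_load,
                  load_threshold=0.5, score_threshold=0.1, budget=3):
    """Return contents worth prefetching while the user sits in a lightly loaded cell.

    next_cell_probs:   dict mapping candidate next cell -> predicted handover probability
    content_probs:     dict mapping content id -> predicted request probability
    current_cell_load: utilization of the serving cell in [0, 1]
    """
    if current_cell_load > load_threshold:
        return []  # serving cell is already busy; defer prefetching

    # Probability that the user moves on to another cell at all
    # (total handover probability, used here as a simple weight).
    p_move = sum(next_cell_probs.values())

    # Score each content by how likely it is to be requested after the move.
    scored = [(cid, p_move * p_req) for cid, p_req in content_probs.items()]
    scored.sort(key=lambda x: x[1], reverse=True)

    return [cid for cid, score in scored[:budget] if score >= score_threshold]


# Example: a user likely to hand over to cell "B" and request video chunk "v42".
plan = prefetch_plan({"B": 0.7, "C": 0.2}, {"v42": 0.6, "v43": 0.3}, current_cell_load=0.3)
print(plan)  # -> ['v42', 'v43']
```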
- PAR ID: 10076439
- Date Published:
- Journal Name: IEEE Transactions on Green Communications and Networking
- ISSN: 2473-2400
- Page Range / eLocation ID: 1 to 1
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Multi-band and multi-tier network densification is considered the most promising solution to the capacity crunch problem of cellular networks. In this direction, small cells (SCs) are being deployed within the macro cell (MC) coverage to off-load some of the users associated with the MCs. This deployment scenario raises several problems; among others, signalling overhead and mobility management become critical considerations. Frequent handovers (HOs) in ultra-dense SC deployments could lead to a dramatic increase in signalling overhead, which suggests a paradigm shift towards a signalling-conscious cellular architecture with smart mobility management. In this regard, the control/data separation architecture (CDSA) with dual connectivity is being considered for the future radio access. Taking the CDSA as the radio access network (RAN) architecture, we quantify the reduction in HO signalling with respect to the conventional approach. We develop analytical models that compare the signalling generated during various HO scenarios in CDSA and conventionally deployed networks. New parameters are introduced which, when set to their optimum values, significantly reduce the HO signalling load. The derived model covers both HO success and HO failure scenarios, with specific derivations for continuous and non-continuous mobility users. Numerical results show promising CDSA gains in terms of savings in HO signalling overhead.
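As a rough illustration of why separating control and data planes reduces handover signalling, the sketch below compares total message counts when every cell crossing triggers a full HO versus when only control (macro) cell changes do; the per-HO message counts and the counting model are placeholders, not values from the paper's analytical model.

```python
# Back-of-the-envelope comparison of HO signalling load (illustrative only; the
# message counts per HO type are assumptions, not the paper's derived model).

def conventional_signalling(total_crossings, msgs_per_full_ho=12):
    # In a conventional deployment, every cell boundary crossing triggers a full HO.
    return total_crossings * msgs_per_full_ho

def cdsa_signalling(sc_crossings, mc_crossings, msgs_per_full_ho=12, msgs_per_data_ho=4):
    # Under CDSA with dual connectivity, only control (macro) cell changes need the
    # full procedure; data-plane small-cell changes use a lighter update.
    return mc_crossings * msgs_per_full_ho + sc_crossings * msgs_per_data_ho

sc, mc = 50, 5   # e.g. 50 small-cell crossings, 5 macro-cell crossings along a route
conv = conventional_signalling(sc + mc)
cdsa = cdsa_signalling(sc, mc)
print(f"conventional: {conv} msgs, CDSA: {cdsa} msgs, saving: {1 - cdsa / conv:.0%}")
```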
Due to the mainstream adoption of cloud computing and its rapidly increasing energy usage, the efficient management of cloud computing resources has become an important issue. A key challenge in managing these resources lies in the volatility of their demand. While a wide variety of online algorithms (e.g., Receding Horizon Control, Online Balanced Descent) have been designed, it is hard for cloud operators to pick the right one; in particular, these algorithms vary greatly in their use of predictions and in their performance guarantees. This paper studies an automatic, real-time algorithm selection scheme. To do this, we empirically study the prediction errors from real-world cloud computing traces. The results show that prediction errors differ across prediction algorithms, across virtual machines, and over the time horizon. Based on these observations, we propose a simple prediction error model and prove upper bounds on the dynamic regret of several online algorithms. We then apply the empirical and theoretical results to create a simple online meta-algorithm that chooses the best algorithm on the fly. Numerical simulations demonstrate that the performance of the designed policy is close to that of the best algorithm in hindsight.
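To make the selection idea concrete, here is a minimal sketch of an online meta-algorithm that tracks the recent cost of each candidate and acts on the cheapest one; the window size, selection rule, and candidate interface are assumptions for illustration, not the paper's actual policy.

```python
# Minimal sketch of an online meta-algorithm that, at each step, acts on the
# candidate with the lowest average cost over a recent window (illustrative only).

from collections import deque

class MetaSelector:
    def __init__(self, candidates, window=20):
        # candidates: dict name -> callable(state, prediction) returning (action, cost)
        self.candidates = candidates
        self.history = {name: deque(maxlen=window) for name in candidates}

    def step(self, state, prediction):
        # Evaluate every candidate (assuming its cost is observable, e.g. via
        # simulation), but only act on the one with the best recent average.
        costs = {}
        for name, algo in self.candidates.items():
            action, cost = algo(state, prediction)
            self.history[name].append(cost)
            costs[name] = (sum(self.history[name]) / len(self.history[name]), action)
        best = min(costs, key=lambda n: costs[n][0])
        return best, costs[best][1]


# Example with two toy candidates that simply charge different fixed costs.
algos = {"RHC": lambda s, p: ("a1", 1.0 + 0.1 * p), "OBD": lambda s, p: ("a2", 1.2)}
sel = MetaSelector(algos)
print(sel.step(state=None, prediction=0.5))  # -> ('RHC', 'a1')
```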
Saving energy for latency-critical applications like web search can be challenging because of their strict tail latency constraints. State-of-the-art power management frameworks use Dynamic Voltage and Frequency Scaling (DVFS) and sleep-state techniques to slow down request processing and finish the search just in time. However, accurately predicting the compute demand of a request can be difficult. In this paper, we present Gemini, a novel power management framework for latency-critical search engines. Gemini has two unique features to capture per-query service time variation. First, at light loads without request queuing, a two-step DVFS is used to manage the CPU power: the initial CPU frequency is selected based on a query-specific service time prediction and then judiciously boosted at the right time to catch up to the deadline. The determination of the boosting time further relies on estimating the error in the prediction of each individual query's service time. At high loads, where there is request queuing, only the current request being executed and the critical request in the queue adopt the two-step DVFS; all the other requests in between use the same frequency to reduce the frequency-transition overhead. Second, we develop two separate neural network models, one for predicting the service time and the other for the error in that prediction. The combination of these two predictors significantly improves the power saving and tail latency results of our two-step DVFS. Gemini is implemented on the Solr search engine. Evaluations on three representative query traces show that Gemini saves 41% of the CPU power and outperforms other state-of-the-art techniques.
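To illustrate the two-step idea in isolation, the following sketch picks the lowest initial frequency that meets the deadline for the predicted work and then computes the latest safe boost time given an estimated prediction error; the frequency set, units, and control rule are assumptions, not Gemini's actual controller.

```python
# Minimal sketch of a two-step DVFS decision (illustrative; not Gemini's controller).

FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # assumed available CPU frequencies

def two_step_dvfs(pred_cycles, pred_error_cycles, deadline_s):
    """Return (initial_freq, boost_time_s, boost_freq); boost fields are None if unneeded.

    pred_cycles:       predicted CPU cycles for the query
    pred_error_cycles: estimated prediction error to hedge against
    deadline_s:        tail-latency deadline in seconds
    """
    # Step 1: lowest frequency that finishes the *predicted* work by the deadline.
    initial = next((f for f in FREQS_GHZ if pred_cycles / (f * 1e9) <= deadline_s),
                   FREQS_GHZ[-1])
    boost = FREQS_GHZ[-1]

    # Step 2: latest boost time such that even the pessimistic workload
    # (prediction + estimated error) still meets the deadline after boosting.
    worst_cycles = pred_cycles + pred_error_cycles
    if worst_cycles / (initial * 1e9) <= deadline_s:
        return initial, None, None  # no boost needed
    t_boost = (worst_cycles - boost * 1e9 * deadline_s) / ((initial - boost) * 1e9)
    return initial, max(0.0, t_boost), boost


# Example: 2.4e9 predicted cycles, 0.4e9 estimated error, 1.2 s deadline.
print(two_step_dvfs(2.4e9, 0.4e9, 1.2))  # -> (2.0, 0.7, 2.8)
```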
Chinese hamster ovary (CHO) cells are essential to biopharmaceutical manufacturing, and production instability, the loss of productivity over time, is a long-standing challenge in the industry. Accurate prediction of cell line stability could enable efficient screening to identify clones suitable for manufacturing, saving significant time and costs. DNA repair genes may offer biomarkers to address this need. In this study, over 40 cell lines representing various host lineages from three companies/organizations were evaluated for expression of five DNA repair genes (Fam35a, Lig4, Palb2, Pari, and Xrcc6). Expression measured in cells with fewer than 30 population doubling levels (PDLs) was correlated to stability profiles at 60+ PDL. Principal component analysis identified markers which separate stable and unstable CHO-DG44 cell lines. Notably, two genes, Lig4 and Xrcc6, showed higher expression in unstable CHO-DG44 cell lines for which copy number loss was identified as the mechanism of production instability. Expression measured across all cell ages showed that lower DNA repair gene expression was associated with increased cell age. Collectively, DNA repair genes provide critical insight into the long-term behavior of CHO cells, and their expression levels have the potential to predict cell line stability in certain cases.
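As a schematic of the analysis style described (not the study's data or pipeline), the sketch below runs PCA on expression of the five genes and compares the first principal component between stable and unstable lines, using synthetic placeholder values.

```python
# Minimal sketch: PCA on expression of five DNA repair genes, compared by stability
# label (synthetic placeholder data; not the study's dataset or exact pipeline).

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

genes = ["Fam35a", "Lig4", "Palb2", "Pari", "Xrcc6"]
rng = np.random.default_rng(0)

# Rows = cell lines measured at <30 PDL, columns = expression of the five genes.
expression = rng.normal(loc=1.0, scale=0.3, size=(40, len(genes)))
stable = rng.integers(0, 2, size=40)  # 1 = stable at 60+ PDL, 0 = unstable

pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(expression))

# Compare the first principal component between stable and unstable lines.
print("mean PC1 (stable):  ", pcs[stable == 1, 0].mean())
print("mean PC1 (unstable):", pcs[stable == 0, 0].mean())
```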