Title: Flow-level Dynamic Bandwidth Allocation in SDN-enabled Edge Cloud using Heuristic Reinforcement Learning
Edge Cloud (EC) is poised to support massive machine-type communication (mMTC) for 5G and IoT by providing compute and network resources at the edge. Yet the EC, being regional and smaller in scale, faces challenges of limited bandwidth and computational throughput, so resource management techniques are necessary to achieve efficient resource allocation. A Software-Defined Network (SDN)-enabled EC architecture is emerging as a potential solution that enables dynamic bandwidth allocation and task scheduling for latency-sensitive and diverse mobile applications in the EC environment. This study proposes a novel Heuristic Reinforcement Learning (HRL) based flow-level dynamic bandwidth allocation framework and validates it through an end-to-end implementation using the OpenFlow meter feature. OpenFlow meters provide granular control and allow demand-based flow management to meet the diverse QoS requirements of IoT traffic. The proposed framework is then evaluated by emulating an EC scenario based on the real NSF COSMOS testbed topology at The City College of New York. A heuristic reinforcement learning technique with linear annealing and a pruning principle are proposed and compared with a baseline approach. Our proposed strategy performs consistently in both Mininet-emulated and hardware OpenFlow switch environments. The performance evaluation considers key metrics associated with real-time applications: throughput, end-to-end delay, packet loss rate, and overall system cost for bandwidth allocation. Furthermore, our proposed linear-annealing method achieves a faster convergence rate and a better reward in terms of system cost, and the proposed pruning principle remarkably reduces control traffic in the network.
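The abstract does not include code, but the meter-based rate limiting it relies on can be sketched with a short controller snippet. The following is a minimal illustration assuming the Ryu OpenFlow 1.3 framework; the meter id, rate, and match fields are hypothetical placeholders, not the paper's actual configuration.

    # Minimal sketch: install an OpenFlow 1.3 meter and bind a flow to it (Ryu).
    # Meter id, rates, and match fields are illustrative, not the paper's values.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls

    class MeterAllocator(app_manager.RyuApp):
        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # 1) Create a meter that drops traffic exceeding 10 Mbps.
            band = parser.OFPMeterBandDrop(rate=10000, burst_size=1000)
            dp.send_msg(parser.OFPMeterMod(dp, command=ofp.OFPMC_ADD,
                                           flags=ofp.OFPMF_KBPS,
                                           meter_id=1, bands=[band]))
            # 2) Bind the meter to a flow: matched packets pass through
            #    meter 1 before normal forwarding.
            match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.2')
            inst = [parser.OFPInstructionMeter(1, ofp.OFPIT_METER),
                    parser.OFPInstructionActions(
                        ofp.OFPIT_APPLY_ACTIONS,
                        [parser.OFPActionOutput(ofp.OFPP_NORMAL)])]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))

Flow-level dynamic allocation then amounts to re-issuing the meter with OFPMC_MODIFY and a new rate whenever the learning agent picks a different bandwidth share for that flow.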
Award ID(s):
1818884, 2029295
PAR ID:
10289812
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Conference on Future Internet of Things and Cloud
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This research project aims to develop a resource management framework for efficient allocation of 5G network resources to Internet of Things (IoT) devices. As 5G technology is increasingly integrated with IoT applications, the diverse demands and use cases of IoT devices necessitate dynamic resource management. The focus of this study is to develop an IoT device environment that uses reinforcement learning (RL) for resource adjustment. The environment observes IoT device parameters including the current bit error rate (BER), the allocated bandwidth, and the current signal power level. Actions the RL agent can take on the environment include adjustments to an IoT device's bandwidth and signal power level. One implementation of the environment is tested with the Proximal Policy Optimization (PPO) and Deep Deterministic Policy Gradient (DDPG) RL algorithms using a continuous action space. Initial results show that PPO models train at a faster rate, while DDPG models explore a wider range of states, leading to better model predictions. Another version is tested with PPO and Deep Q-Networks (DQN) using a discrete action space. DQN demonstrates slightly better results than PPO, possibly because its value-based approach is better suited to discrete action spaces.
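     The observation/action loop described above can be sketched as a Gymnasium-style environment. The spaces below mirror the parameters named in the abstract (BER, allocated bandwidth, signal power), but the bounds, toy dynamics, and reward shaping are illustrative assumptions, not the project's design.

        # Illustrative Gymnasium environment for IoT bandwidth/power control.
        # Only the observed parameters come from the abstract; the bounds,
        # dynamics, and reward below are assumptions.
        import numpy as np
        import gymnasium as gym
        from gymnasium import spaces

        class IoTResourceEnv(gym.Env):
            def __init__(self):
                # Observation: [BER, bandwidth (MHz), signal power (dBm)]
                self.observation_space = spaces.Box(
                    low=np.array([0.0, 0.1, -30.0], dtype=np.float32),
                    high=np.array([1.0, 20.0, 30.0], dtype=np.float32))
                # Continuous action: deltas for bandwidth and power.
                self.action_space = spaces.Box(-1.0, 1.0, shape=(2,),
                                               dtype=np.float32)

            def reset(self, seed=None, options=None):
                super().reset(seed=seed)
                self.state = np.array([0.05, 5.0, 0.0], dtype=np.float32)
                return self.state, {}

            def step(self, action):
                ber, bw, pwr = self.state
                bw = float(np.clip(bw + action[0], 0.1, 20.0))
                pwr = float(np.clip(pwr + 3.0 * action[1], -30.0, 30.0))
                # Toy channel model: more power lowers BER slightly.
                ber = float(np.clip(0.05 - 0.001 * pwr + 0.001 * bw, 0.0, 1.0))
                self.state = np.array([ber, bw, pwr], dtype=np.float32)
                # Reward low BER, penalize resource consumption.
                reward = -ber - 0.01 * bw - 0.005 * abs(pwr)
                return self.state, reward, False, False, {}

     An environment like this can be handed directly to off-the-shelf PPO/DDPG implementations for the continuous-action experiments, or wrapped with a discretized action space for DQN.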
  2. Internet of Things (IoT) systems have generated new requirements in all aspects of their development and deployment, including expanded Quality of Service (QoS) needs, enhanced resiliency of computing and connectivity, and scalability to support massive numbers of end devices in a variety of applications. The research reported here concerns the development of a reliable and secure IoT/cyber-physical system (CPS), providing network support for smart and connected communities, to be realized by means of distributed, secure, resilient Edge Cloud (EC) computing. This distributed EC system will be a network of geographically distributed EC nodes brokering between end devices and Backend Cloud (BC) servers. This paper focuses on three main aspects of the CPS: a) resource management in mobile cloud computing; b) information management in dynamic distributed databases; and c) a biologically inspired intrusion detection system.
  3. Network slicing enables operators to efficiently support diverse applications on a shared infrastructure. However, the evolving complexity of networks, compounded by inter-cell interference, necessitates agile and adaptable resource management. While deep learning offers solutions for coping with complexity, its adaptability to dynamic configurations remains limited. In this paper, we propose a novel hybrid deep learning algorithm called IDLA (integrated deep learning with the Lagrangian method). This integrated approach aims to enhance the scalability, flexibility, and robustness of slicing resource allocation by harnessing the high approximation capability of deep learning and the strong generalization of classical non-linear optimization methods. We then introduce a variational information bottleneck (VIB)-assisted domain adaptation (DA) approach to enhance IDLA's adaptability across diverse network environments and conditions: a VIB-based Quality of Service (QoS) estimator is pre-trained using slice-specific inputs shared across all source-domain slices. Each target-domain slice can deploy this estimator to predict its QoS and optimize slice resource allocation using the IDLA algorithm. The VIB-based estimator is continuously fine-tuned with a mixture of samples from both the source and target domains until convergence. Evaluated on a multi-cell network with time-varying slice configurations, the VIB-enhanced IDLA algorithm outperforms baselines such as heuristic and deep reinforcement learning-based solutions, achieving twice the convergence speed and 16.52% higher asymptotic performance after slicing configuration changes. A transferability assessment demonstrates a 25.66% improvement in estimation accuracy with VIB, especially in scenarios with significant domain gaps, highlighting its robustness and effectiveness across diverse domains.
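     The abstract gives no equations, but the core of any deep-learning-plus-Lagrangian scheme is a primal-dual loop: gradient descent on the network weights and gradient ascent on multipliers for violated QoS constraints. The PyTorch-style sketch below shows that generic pattern, not the paper's IDLA algorithm; the allocator network, constraint, and step sizes are all hypothetical.

        # Generic primal-dual (Lagrangian) training loop for constrained
        # slice resource allocation. This is the standard pattern IDLA
        # builds on, not the paper's implementation; all pieces are toys.
        import torch

        allocator = torch.nn.Sequential(
            torch.nn.Linear(8, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 4), torch.nn.Softmax(dim=-1))
        lam = torch.tensor(0.0)       # Lagrange multiplier (dual variable)
        opt = torch.optim.Adam(allocator.parameters(), lr=1e-3)
        eta = 1e-2                    # dual ascent step size

        for step in range(1000):
            state = torch.randn(32, 8)            # toy slice observations
            demand = torch.rand(32, 4) * 0.25     # toy per-slice QoS demands
            alloc = allocator(state)              # fractional bandwidth shares
            utility = torch.log(alloc + 1e-6).mean()       # concave utility
            violation = torch.relu(demand - alloc).mean()  # QoS shortfall
            # Primal step: minimize L = -utility + lam * violation.
            loss = -utility + lam * violation
            opt.zero_grad(); loss.backward(); opt.step()
            # Dual step: raise the multiplier while constraints are violated.
            lam = torch.clamp(lam + eta * violation.detach(), min=0.0)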
  4. Heterogeneous chiplets have been proposed for accelerating high-performance computing tasks. Integrated inside one package, CPU and GPU chiplets can share a common interconnection network implemented through the interposer. However, CPU and GPU applications generally have very different traffic patterns. Without effective management of the network resource, some chiplets can suffer significant performance degradation because the network bandwidth is taken away by communication-intensive applications. Techniques therefore need to be developed to effectively manage the shared network resources. In a chiplet-based system, resource management needs not only to react in real time but also to be cost-efficient. In this work, we propose a reconfigurable network architecture that leverages a Kalman filter to make accurate predictions of the network resources needed by the applications and then adaptively changes the resource allocation. Using our design, the network bandwidth can be fairly allocated to avoid starvation or performance degradation. Our evaluation results show that the proposed reconfigurable interconnection network can dynamically react to changes in the chiplets' traffic demand and improve system performance with low cost and design complexity.
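     The abstract does not specify the filter's state model, but a minimal scalar Kalman filter tracking one chiplet's bandwidth demand illustrates the predict-then-correct loop such a reconfigurable interposer network would run each control epoch. The random-walk model and noise variances below are illustrative assumptions.

        # Minimal scalar Kalman filter tracking per-chiplet bandwidth demand.
        # The random-walk model and the noise variances are assumptions;
        # only the idea of Kalman-predicted demand comes from the abstract.
        class BandwidthKalman:
            def __init__(self, q=0.5, r=2.0):
                self.x = 0.0   # estimated demand (e.g., GB/s)
                self.p = 1.0   # estimate variance
                self.q = q     # process noise: how fast demand drifts
                self.r = r     # measurement noise: traffic-counter jitter

            def update(self, measured):
                self.p += self.q                   # predict: uncertainty grows
                k = self.p / (self.p + self.r)     # Kalman gain
                self.x += k * (measured - self.x)  # correct toward measurement
                self.p *= (1.0 - k)
                return self.x

        # Re-split link bandwidth in proportion to the filtered estimates.
        filters = [BandwidthKalman() for _ in range(4)]
        estimates = [f.update(m) for f, m in zip(filters, [3.1, 0.4, 7.9, 1.2])]
        shares = [e / sum(estimates) for e in estimates]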
  5. This research proposes a dynamic resource allocation method for vehicle-to-everything (V2X) communications in sixth-generation (6G) cellular networks. Cellular V2X (C-V2X) communications empower advanced applications but at the same time bring unprecedented challenges in fully utilizing the limited physical-layer resources, given that most of the applications require ultra-low latency, high data rates, and high reliability. Resource allocation plays a pivotal role in satisfying such requirements and guaranteeing quality of service (QoS). Based on this observation, a novel fuzzy-logic-assisted Q-learning model (FAQ) is proposed to intelligently and dynamically allocate resources by taking advantage of the centralized allocation mode. The proposed FAQ model reuses resources to maximize network throughput while minimizing the interference caused by concurrent transmissions. The fuzzy-logic module expedites learning and improves the performance of Q-learning. A mathematical model is developed to analyze the network throughput under interference. To evaluate the performance, a system model for V2X communications is built for urban areas, where various V2X services are deployed in the network. Simulation results show that the proposed FAQ algorithm significantly outperforms deep reinforcement learning, Q-learning, and other advanced allocation strategies in terms of convergence speed and network throughput.
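     The abstract positions fuzzy logic as an accelerator for Q-learning rather than a replacement; one common pattern is to let a fuzzy inference score bias action selection. The sketch below shows tabular Q-learning with such a bias; the membership function and constants are placeholders, not the paper's FAQ design.

        # Tabular Q-learning with a fuzzy-logic bias on action selection.
        # The triangular membership and all constants are illustrative;
        # the paper's FAQ model is not reproduced here.
        import random

        N_STATES, N_ACTIONS = 50, 10   # states x candidate resource blocks
        Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
        alpha, gamma, eps = 0.1, 0.9, 0.2

        def fuzzy_score(interference):
            # Toy membership: 1.0 at no interference, 0.0 at full scale.
            return max(0.0, 1.0 - interference)

        def select_action(s, interference):   # interference: per-block levels
            if random.random() < eps:
                # Explore, weighted toward low-interference resource blocks.
                w = [fuzzy_score(i) + 1e-3 for i in interference]
                return random.choices(range(N_ACTIONS), weights=w)[0]
            # Exploit: Q-value nudged by the fuzzy preference.
            score = [Q[s][a] + 0.1 * fuzzy_score(interference[a])
                     for a in range(N_ACTIONS)]
            return max(range(N_ACTIONS), key=score.__getitem__)

        def q_update(s, a, reward, s_next):
            # Standard one-step Q-learning target.
            Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])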