The use of newer spectrum bands, such as those in 5G and 6G networks, can inadvertently cause interference to passive sensing applications operating in adjacent portions of the spectrum. One such application that has received considerable attention is passive weather sensing, where leakage from 5G mmWave transmissions in the 26 GHz band could impact the observations of passive sensors on weather prediction satellites. To mitigate such problems, we present a design framework for mmWave networks that uses filtennas (filtering antennas) at the transmitter together with integrated resource allocation to minimize leakage into adjacent channels. Specifically, we propose an Iterative Leakage Aware Water Filling solution that allocates power and bandwidth in a system employing filtennas, guaranteeing performance requirements while reducing leakage. In addition, a key contribution of this work is the characterization of the leakage function in terms of the order of the filtennas, which is incorporated into our resource allocation framework.
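To fix ideas, the classical iterative water-filling procedure that the leakage-aware solution builds on can be sketched as below. This is a minimal illustration of baseline water filling only, not the paper's Iterative Leakage Aware Water Filling algorithm; the function name and interface are assumptions.

```python
def water_filling(gains, total_power):
    """Allocate total_power across channels with effective gains g_i.

    Baseline water filling: power p_i = max(0, mu - 1/g_i), where the
    water level mu is chosen so that the allocations sum to total_power.
    Illustrative sketch only, not the paper's leakage-aware variant.
    """
    # Consider channels from strongest to weakest; weak ones may get zero.
    active = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)
    while active:
        # Water level satisfying sum over active of (mu - 1/g_i) = P.
        mu = (total_power + sum(1.0 / gains[i] for i in active)) / len(active)
        # If the weakest active channel would get negative power, drop it.
        if mu - 1.0 / gains[active[-1]] < 0:
            active.pop()
            continue
        powers = [0.0] * len(gains)
        for i in active:
            powers[i] = mu - 1.0 / gains[i]
        return powers
    return [0.0] * len(gains)
```

The leakage-aware version described in the abstract additionally penalizes allocations according to the filtenna-order-dependent leakage function, which this baseline omits.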
Adaptive Risk-Aware Resource Orchestration for 5G Microservices over Multi-Tier Edge-Cloud Systems
Modern fifth-generation (5G) networks are increasingly moving towards architectures characterized by softwarization and virtualization. This paper addresses the complexities and challenges in deploying applications and services in the emerging multi-tiered 5G network architecture, particularly in the context of microservices-based applications. These applications, characterized by their structure as directed graphs of interdependent functions, are sensitive to the deployment tiers and resource allocation strategies, which can result in performance degradation and susceptibility to failures. Additionally, the threat of deploying potentially malicious applications exacerbates resource allocation inefficiencies. To address these issues, we propose a novel optimization framework that incorporates a probabilistic approach for assessing the risk of malicious applications, leading to a more resilient resource allocation strategy. Our framework dynamically optimizes both computational and networking resources across various tiers, aiming to enhance key performance metrics such as latency, accuracy, and resource utilization. Through detailed simulations, we demonstrate that our framework not only satisfies strict performance requirements but also surpasses existing methods in efficiency and security.
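The risk-aware placement idea can be illustrated with a toy expected-cost rule: a tier is chosen to minimize latency plus the expected damage from a potentially malicious application. The tier names, cost model, and weights below are hypothetical, not the paper's actual optimization framework.

```python
def choose_tier(tiers, p_malicious, damage_penalty):
    """Pick the tier minimizing expected cost = latency + risk exposure.

    tiers: dicts with 'name', 'latency_ms', and 'exposure' in [0, 1],
    where 'exposure' scales how much damage a malicious app could do
    if deployed at that tier. All values here are illustrative.
    """
    def expected_cost(tier):
        return tier["latency_ms"] + p_malicious * tier["exposure"] * damage_penalty
    return min(tiers, key=expected_cost)

tiers = [
    {"name": "far-edge", "latency_ms": 2.0, "exposure": 0.9},
    {"name": "regional", "latency_ms": 8.0, "exposure": 0.4},
    {"name": "cloud", "latency_ms": 25.0, "exposure": 0.1},
]
```

Under this rule a low-risk application lands on the fast far-edge tier, while a high assessed risk pushes the deployment toward the better-isolated cloud tier, mirroring the trade-off the framework optimizes.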
- Award ID(s):
- 2226232
- PAR ID:
- 10628435
- Publisher / Repository:
- IEEE
- Date Published:
- ISSN:
- 2694-2941
- ISBN:
- 979-8-3503-0405-3
- Page Range / eLocation ID:
- 359 to 364
- Format(s):
- Medium: X
- Location:
- Denver, CO, USA
- Sponsoring Org:
- National Science Foundation
More Like this
As 5G networks become part of the critical infrastructures whose dysfunctions can cause severe damage to society, their security has been increasingly scrutinized. Recent works have revealed multiple specification-level flaws in 5G core networks, but there are no easy solutions to patch the vulnerabilities in practice. Against this backdrop, this work proposes a unified framework called PROV5GC to detect and attribute various attacks that exploit these vulnerabilities in real-world 5G networks. PROV5GC tackles three technical challenges faced when deploying existing intrusion detection system (IDS) frameworks to protect 5G core networks, namely, message encryption, partial observability, and identity ephemerality. The key idea of PROV5GC is to use provenance graphs, which are constructed from the communication messages logged by various 5G core network functions. Based on these graphs, PROV5GC infers the original call flows to identify those with malicious intentions. We demonstrate how PROV5GC can be used to detect three different kinds of attacks, which aim to compromise the confidentiality, integrity, and/or availability of 5G core networks. We build a prototype of PROV5GC and evaluate its execution performance on commodity cluster servers. We observe that due to stateless instrumentation, the logging overhead incurred by each network function is low. We also show that PROV5GC can be used to detect the three 5G-specific attacks with high accuracy.
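The provenance-graph idea can be sketched as follows: logged inter-NF messages become edges, and a call flow is recovered by walking the graph from its originating function. The log format, field names, and NF service names are assumptions in the spirit of, but not identical to, PROV5GC.

```python
from collections import defaultdict

def build_provenance_graph(log):
    """Build an adjacency list from logged messages.

    log: iterable of (src_nf, dst_nf, message, seq) tuples, where seq is a
    logical sequence number. Format is an illustrative assumption.
    """
    graph = defaultdict(list)
    for src, dst, msg, _seq in sorted(log, key=lambda e: e[3]):
        graph[src].append((dst, msg))
    return graph

def call_flow(graph, start):
    """Follow edges from a starting NF to reconstruct one call flow."""
    flow, node = [], start
    while graph[node]:
        dst, msg = graph[node].pop(0)
        flow.append((node, dst, msg))
        node = dst
    return flow

log = [
    ("AMF", "AUSF", "Nausf_UEAuthentication", 1),
    ("AUSF", "UDM", "Nudm_UEAuthentication_Get", 2),
]
```

A detector built on such graphs would then compare reconstructed flows against the call flows the 5G specifications permit; the challenges the paper names (encryption, partial observability, ephemeral identities) make that reconstruction step nontrivial in practice.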
5G and beyond communication networks must satisfy very low latency standards, high reliability, high-speed user connectivity, stronger security, improved capacity, and better service demands. Meeting such a wide range of KPIs (Key Performance Indicators) requires a smart, intelligent, and programmable solution for TSPs (Telecommunication Service Providers). Resource availability and quality sustainability are challenging parameters in a heterogeneous 5G environment. Programmable Dynamic Network Slicing (PDNS) is a key enabling technology that allows multiple tenants to run their versatile applications simultaneously over shared physical infrastructure. Emerging technologies such as virtualized Software-Defined Networks (vSDN) and Artificial Intelligence (AI) play a pivotal supporting role in addressing the above constraints. Using the PDNS framework, we propose a novel slice backup algorithm that leverages a Deep Learning (DL) neural network to orchestrate network latency and load efficiently. Our model is trained on the available KPIs, and incoming traffic is analyzed. The proposed solution performs stable load balancing between shared slices through intelligent resource allocation, even under extreme conditions such as slice unavailability. The framework withstands service outages and always selects the most suitable slice as a backup. Our results show latency-aware resource distribution for better network stability.
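The backup-selection step can be illustrated with a simple KPI-scoring stand-in for the learned policy: among the slices still available when one fails, pick the one with the best weighted load/latency score. The slice fields, weights, and normalization are illustrative assumptions, not the paper's DL model.

```python
def pick_backup_slice(slices, failed):
    """Choose a backup for the failed slice by a weighted KPI score.

    slices: dicts with 'name', 'available', 'load' in [0, 1], 'latency_ms'.
    The 0.6/0.4 weights and latency normalization are assumptions standing
    in for the trained neural network's learned preference.
    """
    candidates = [s for s in slices if s["name"] != failed and s["available"]]
    # Lower score is better: lightly loaded, low-latency slices win.
    return min(candidates,
               key=lambda s: 0.6 * s["load"] + 0.4 * s["latency_ms"] / 10.0)

slices = [
    {"name": "A", "available": True, "load": 0.9, "latency_ms": 5.0},
    {"name": "B", "available": True, "load": 0.3, "latency_ms": 8.0},
    {"name": "C", "available": False, "load": 0.1, "latency_ms": 1.0},
]
```

In the paper's framework the score is produced by a DL model trained on historical KPIs rather than fixed weights, which lets the ranking adapt as traffic patterns change.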
As we progress from 5G to emerging 6G wireless, the spectrum of cellular communication services is set to broaden significantly, encompassing real-time remote healthcare applications and sophisticated smart infrastructure solutions, among others. This expansion brings to the forefront a diverse set of service requirements, underscoring the challenges and complexities inherent in next-generation networks. In the realm of 5G, Enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC) have been pivotal service categories. As we venture into the 6G era, these foundational use cases will evolve and embody additional performance criteria, further diversifying the network service portfolio. This evolution amplifies the necessity for dynamic and efficient resource allocation strategies capable of balancing the diverse service demands. In response to this need, we introduce the Intelligent Dynamic Resource Allocation and Puncturing (IDRAP) framework. Leveraging Deep Reinforcement Learning (DRL), IDRAP is designed to balance between the bandwidth-intensive requirements of eMBB services and the latency and reliability needs of URLLC users. The performance of IDRAP is evaluated and compared against other resource management solutions, including Intelligent Dynamic Resource Slicing (IDRS), Policy Gradient Actor-Critic Learning (PGACL), System-Wide Tradeoff Scheduling (SWTS), Sum-Log, and Sum-Rate. The results show an improved Service Satisfaction Level (SSL) for eMBB users while maintaining the essential SSL threshold for URLLC services.
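The puncturing mechanism that IDRAP learns to control can be shown in miniature: urgent URLLC arrivals within a mini-slot preempt resource blocks already granted to eMBB traffic. This is an illustrative, non-DRL sketch; the function and quantity names are assumptions.

```python
def puncture(embb_rbs, urllc_demand_rbs):
    """Resolve one mini-slot: URLLC demand preempts eMBB resource blocks.

    Returns (embb_kept, urllc_served). URLLC is given strict priority,
    which is the basic trade-off a learned puncturing policy must manage:
    serving URLLC reliability at the cost of eMBB throughput.
    """
    punctured = min(urllc_demand_rbs, embb_rbs)
    return embb_rbs - punctured, punctured
```

A DRL agent such as IDRAP replaces this strict-priority rule with a learned policy that decides how much eMBB allocation to expose to puncturing, balancing eMBB SSL against the URLLC SSL threshold.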
Edge Cloud (EC) is poised to support massive machine-type communication (mMTC) for 5G and IoT by providing compute and network resources at the edge. Yet the EC, being regional and smaller in scale, faces challenges in bandwidth and computational throughput. Resource management techniques are necessary to achieve efficient resource allocation objectives. Software Defined Network (SDN) enabled EC architecture is emerging as a potential solution that enables dynamic bandwidth allocation and task scheduling for latency-sensitive and diverse mobile applications in the EC environment. This study proposes a novel Heuristic Reinforcement Learning (HRL) based flow-level dynamic bandwidth allocation framework and validates it through an end-to-end implementation using the OpenFlow meter feature. OpenFlow meters provide granular control and allow demand-based flow management to meet the diverse QoS requirements germane to IoT traffic. The proposed framework is evaluated by emulating an EC scenario based on the real NSF COSMOS testbed topology at The City College of New York. A specific heuristic reinforcement learning approach with a linear-annealing technique and a pruning principle is proposed and compared with the baseline approach. Our proposed strategy performs consistently in both Mininet and hardware OpenFlow switch based environments. The performance evaluation considers key metrics associated with real-time applications: throughput, end-to-end delay, packet loss rate, and overall system cost for bandwidth allocation. Furthermore, our proposed linear-annealing method achieves a faster convergence rate and better reward in terms of system cost, and the proposed pruning principle remarkably reduces control traffic in the network.
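The linear-annealing technique mentioned here is, in its standard form, a schedule that decays the exploration rate of an epsilon-greedy reinforcement-learning agent linearly over training. A minimal sketch, with constants that are illustrative rather than taken from the paper:

```python
def epsilon(step, eps_start=1.0, eps_end=0.05, anneal_steps=10_000):
    """Linearly anneal the exploration rate from eps_start to eps_end.

    Early in training the agent explores bandwidth allocations almost at
    random (epsilon near 1); after anneal_steps it mostly exploits the
    learned policy (epsilon held at eps_end). Constants are assumptions.
    """
    frac = min(step / anneal_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)
```

Annealing the schedule linearly, rather than keeping epsilon fixed, is one standard way to get the faster convergence the abstract reports relative to a baseline exploration strategy.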