Multiple visions of 6G networks position Artificial Intelligence (AI) as a central, native element. When 6G systems are deployed at scale, end-to-end AI-based solutions will necessarily have to span both the radio and the fiber-optic domain. This paper introduces the Decentralized Multi-Party, Multi-Network AI (DMMAI) framework for integrating AI into 6G networks deployed at scale. DMMAI harmonizes AI-driven controls across diverse network platforms and thus facilitates networks that autonomously configure, monitor, and repair themselves. This is particularly crucial at the network edge, where advanced applications meet heightened functionality and security demands. Integrating the radio and optical domains is vital because AI research is currently compartmentalized within each, leaving their interaction poorly understood. Our approach explores multi-network orchestration and AI control integration, filling a critical gap in standardized frameworks for AI-driven coordination in 6G networks. The DMMAI framework is a step towards a global standard for AI in 6G, aiming to establish reference use cases, data and model management methods, and benchmarking platforms for future AI/ML solutions.
On Softwarization of Intelligence in 6G Networks for Ultra-Fast Optimal Policy Selection: Challenges and Opportunities
The emerging Sixth Generation (6G) communication networks, promising 100 to 1000 Gb/s rates and ultra-low (1 ms) latency, are anticipated to have native, embedded Artificial Intelligence (AI) capability to support a myriad of services, such as Holographic Type Communications (HTC), the tactile Internet, remote surgery, and so on. However, these services require ultra-reliability, which is highly impacted by the dynamically changing environment of 6G heterogeneous tiny cells, rendering static AI solutions that fit all scenarios and devices impractical. Hence, this article introduces a novel concept, the softwarization of intelligence in 6G networks, to select the optimal policy ultra-fast based on the highly varying channel conditions, traffic demand, user mobility, and so forth. Our envisioned concept is exemplified in a Multi-Armed Bandit (MAB) framework and evaluated in a use case with two simultaneous scenarios: Neighbor Discovery and Selection (NDS) in a Device-to-Device (D2D) network, and aerial gateway selection in an Unmanned Aerial Vehicle (UAV)-based under-served area network. Furthermore, our concept is evaluated through extensive computer-based simulations that indicate encouraging performance. Finally, related challenges and future directions are highlighted.
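As a toy illustration of the bandit view of ultra-fast policy selection, the sketch below (ours, not from the article; the Bernoulli reward and per-policy success rates are hypothetical stand-ins for observed link quality) runs UCB1 over a handful of candidate policies:

```python
import math
import random

class UCB1PolicySelector:
    """Minimal UCB1 bandit: each 'arm' is a candidate network policy."""

    def __init__(self, n_policies):
        self.counts = [0] * n_policies    # times each policy was tried
        self.values = [0.0] * n_policies  # running mean reward per policy

    def select(self):
        # Try every policy once before trusting the confidence bounds.
        for p, c in enumerate(self.counts):
            if c == 0:
                return p
        total = sum(self.counts)
        ucb = [v + math.sqrt(2 * math.log(total) / c)
               for v, c in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, policy, reward):
        self.counts[policy] += 1
        self.values[policy] += (reward - self.values[policy]) / self.counts[policy]

# Toy loop: reward = 1 if the chosen policy 'succeeded' this round.
random.seed(0)
true_means = [0.3, 0.55, 0.7]  # hypothetical per-policy success rates
bandit = UCB1PolicySelector(len(true_means))
for _ in range(1000):
    p = bandit.select()
    bandit.update(p, 1.0 if random.random() < true_means[p] else 0.0)
print("estimated policy values:", [round(v, 2) for v in bandit.values])
```

In a D2D neighbor-selection or UAV gateway-selection setting, each arm would map to a concrete choice (a neighbor, a gateway) and the reward to a measured quantity such as throughput or discovery success.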
- Award ID(s): 2210252
- PAR ID: 10515874
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Network
- Volume: 37
- Issue: 2
- ISSN: 0890-8044
- Page Range / eLocation ID: 190 to 197
- Subject(s) / Keyword(s): 6G mobile communication; Artificial intelligence; Optimization; Computational modeling; Vehicle dynamics; Device-to-device communication; Data models
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Openness and intelligence are two enabling features to be introduced in next-generation wireless networks, for example, Beyond 5G and 6G, to support service heterogeneity, open hardware, optimal resource utilization, and on-demand service deployment. The open radio access network (O-RAN) is a promising RAN architecture for achieving both openness and intelligence through virtualized network elements and well-defined interfaces. While deploying artificial intelligence (AI) models is becoming easier in O-RAN, one significant challenge that has long been neglected is the comprehensive testing of their performance in realistic environments. This article presents a general automated, distributed, and AI-enabled testing framework to test AI models deployed in O-RAN in terms of their decision-making performance, vulnerability, and security. The framework adopts a master-actor architecture to manage a number of end devices for distributed testing. More importantly, it leverages AI to automatically and intelligently explore the decision space of the AI models under test. Both software simulation testing and software-defined radio hardware testing are supported, enabling rapid proof-of-concept and experimental research on wireless research platforms.
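As a rough sketch of the master-actor pattern (ours, not the article's implementation; the model, its inputs, and the thresholds are all placeholders), a master process can fan test inputs out to actor workers that each record the deployed model's decision:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def model_under_test(snr, load):
    # Placeholder standing in for an AI model deployed in O-RAN.
    return "handover" if snr < 0.3 and load > 0.7 else "stay"

def actor(case):
    """Actor: apply one test input and report the model's decision."""
    snr, load = case
    return case, model_under_test(snr, load)

def master(n_cases=200, seed=1):
    """Master: sample the input space and fan cases out to actors."""
    random.seed(seed)
    cases = [(random.random(), random.random()) for _ in range(n_cases)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(actor, cases))
    # Cases that trigger the rarer decision are candidates for closer
    # (e.g., adversarial or hardware-in-the-loop) testing.
    flagged = [c for c, d in results if d == "handover"]
    print(f"{len(flagged)}/{n_cases} cases triggered a handover decision")

master()
```

The AI-driven exploration the article describes would replace the uniform random sampling here with a strategy that steers test generation toward decision boundaries or suspected vulnerabilities.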
With the advances in wireless communications towards beyond-5G (B5G) and 6G networks, new signal processing and resource management methods need to be explored to overcome channel impairments and other radio and computing obstacles. In contrast to conventional methods based on classic digital communications structures, B5G and 6G will leverage artificial intelligence (AI) to configure or adapt radios and networks to the operational context. This requires the ability to reformulate legacy transceiver structures and to drive research, development, and standardization that can leverage the amount of data that is available and that can be processed with the available computing technology. This paper describes this vision and discusses successful research that justifies it, as well as the remaining challenges. We numerically analyze some of the tradeoffs when replacing the physical-layer receiver processing with an artificial neural network (ANN).
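To make the receiver-replacement tradeoff concrete, here is a self-contained toy of ours (not the paper's setup): a single trained neuron learns a BPSK hard-decision rule and is compared against the classic sign detector over an AWGN channel. In pure AWGN the learned detector can at best match the classic one; the interest in ANN receivers lies in channels with impairments the legacy structure does not model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy link: BPSK symbols over an AWGN channel at a hypothetical SNR.
n, snr_db = 20000, 4.0
bits = rng.integers(0, 2, n)
x = 2.0 * bits - 1.0                       # map {0,1} -> {-1,+1}
y = x + 10 ** (-snr_db / 20) * rng.standard_normal(n)

# Classic receiver: hard decision on the sign of the received sample.
ber_classic = np.mean((y > 0).astype(int) != bits)

# 'ANN receiver': one sigmoid neuron trained by gradient descent to map
# received samples to bit probabilities (logistic regression).
w, b, lr = 0.1, 0.0, 0.05
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(w * y + b)))   # forward pass
    grad = p - bits                          # cross-entropy gradient
    w -= lr * np.mean(grad * y)
    b -= lr * np.mean(grad)
ber_ann = np.mean((1.0 / (1.0 + np.exp(-(w * y + b))) > 0.5).astype(int) != bits)

print(f"classic BER: {ber_classic:.4f}, learned BER: {ber_ann:.4f}")
```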
As we progress from 5G to emerging 6G wireless, the spectrum of cellular communication services is set to broaden significantly, encompassing real-time remote healthcare applications and sophisticated smart infrastructure solutions, among others. This expansion brings to the forefront a diverse set of service requirements, underscoring the challenges and complexities inherent in next-generation networks. In the realm of 5G, Enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC) have been pivotal service categories. As we venture into the 6G era, these foundational use cases will evolve and embody additional performance criteria, further diversifying the network service portfolio. This evolution amplifies the necessity for dynamic and efficient resource allocation strategies capable of balancing the diverse service demands. In response to this need, we introduce the Intelligent Dynamic Resource Allocation and Puncturing (IDRAP) framework. Leveraging Deep Reinforcement Learning (DRL), IDRAP is designed to balance the bandwidth-intensive requirements of eMBB services against the latency and reliability needs of URLLC users. The performance of IDRAP is evaluated and compared against other resource management solutions, including Intelligent Dynamic Resource Slicing (IDRS), Policy Gradient Actor-Critic Learning (PGACL), System-Wide Tradeoff Scheduling (SWTS), Sum-Log, and Sum-Rate. The results show an improved Service Satisfaction Level (SSL) for eMBB users while maintaining the essential SSL threshold for URLLC services.
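IDRAP itself relies on Deep Reinforcement Learning, but the allocation-and-puncturing loop can be sketched with plain tabular Q-learning (a simplification of ours; the state space, actions, and reward weights below are hypothetical): the state is the URLLC backlog and the action is how many eMBB resource blocks to puncture for URLLC.

```python
import random

# Toy slicing model: state = URLLC backlog (0..4 packets); action = number
# of resource blocks punctured from eMBB for URLLC this slot (0..2).
N_STATES, N_ACTIONS = 5, 3
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.1
random.seed(0)

def step(state, action):
    served = min(state, action)                # URLLC packets served
    arrivals = random.choice([0, 1, 1, 2])     # new URLLC arrivals
    next_state = min(N_STATES - 1, state - served + arrivals)
    # Reward trades eMBB bandwidth (unpunctured blocks) against the
    # latency penalty of leaving URLLC packets queued.
    reward = (N_ACTIONS - 1 - action) * 1.0 - (state - served) * 2.0
    return next_state, reward

state = 0
for _ in range(20000):
    a = (random.randrange(N_ACTIONS) if random.random() < eps
         else max(range(N_ACTIONS), key=lambda i: Q[state][i]))
    nxt, r = step(state, a)
    Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
    state = nxt

print("punctured blocks per backlog level:",
      [max(range(N_ACTIONS), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```

The learned policy punctures little when the URLLC queue is empty and more as backlog grows, which is the qualitative behavior a DRL agent would generalize to a far richer state space.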
Thanks to advancements in wireless networks, robotics, and artificial intelligence, future manufacturing and agriculture processes may be capable of producing more output at lower cost through automation. With ultra-fast 5G mmWave wireless networks, data can be transferred to and from servers within a few milliseconds for real-time control loops, while robotics and artificial intelligence can allow robots to work alongside humans in factory and agricultural environments. One important consideration for these applications is whether the "intelligence" that processes data from the environment and decides how to react should be located directly on the robotic device that interacts with the environment, a scenario called "edge computing," or on more powerful centralized servers that communicate with the robotic device over a network, called "cloud computing." For applications that require a fast response time, such as a robot that is moving and reacting to an agricultural environment in real time, there are two important tradeoffs to consider. On the one hand, the processor on the edge device is likely not as powerful as the cloud server and may take longer to generate the result. On the other hand, cloud computing requires both the input data and the response to traverse a network, which adds delay that may cancel out the faster processing time of the cloud server. Even with ultra-fast 5G mmWave wireless links, the frequent blockages that are characteristic of this band can still add delay. To explore this issue, we run a series of experiments on the Chameleon testbed emulating both the edge and cloud scenarios under various conditions, including different types of hardware acceleration at the edge and in the cloud, and different network configurations between the edge device and the cloud. These experiments will inform future use of these technologies and serve as a jumping-off point for further research.
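The core tradeoff described here reduces to simple expected-latency arithmetic. The sketch below (our toy model with hypothetical numbers, not the testbed's measurements) shows how mmWave blockage probability can flip the edge-versus-cloud decision:

```python
def end_to_end_latency_ms(compute_ms, network_rtt_ms=0.0,
                          blockage_prob=0.0, blockage_penalty_ms=0.0):
    """Expected response time: processing plus (for cloud) a network round
    trip inflated by occasional mmWave blockage events."""
    return compute_ms + network_rtt_ms + blockage_prob * blockage_penalty_ms

# Hypothetical numbers: a slower edge accelerator vs. a faster cloud GPU
# reached over a 5G mmWave link that is occasionally blocked.
edge = end_to_end_latency_ms(compute_ms=25.0)
cloud = end_to_end_latency_ms(compute_ms=6.0, network_rtt_ms=8.0,
                              blockage_prob=0.2, blockage_penalty_ms=40.0)
print(f"edge: {edge:.1f} ms, cloud: {cloud:.1f} ms")
# edge: 25.0 ms, cloud: 22.0 ms; raise blockage_prob to 0.3 and the
# cloud's expected latency (26.0 ms) loses its advantage.
```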