Multiple visions of 6G networks elicit Artificial Intelligence (AI) as a central, native element. When 6G systems are deployed at a large scale, end-to-end AI-based solutions will necessarily have to encompass both the radio and the fiber-optic domain. This paper introduces the Decentralized Multi-Party, Multi-Network AI (DMMAI) framework for integrating AI into 6G networks deployed at scale. DMMAI harmonizes AI-driven controls across diverse network platforms and thus facilitates networks that autonomously configure, monitor, and repair themselves. This is particularly crucial at the network edge, where advanced applications meet heightened functionality and security demands. The radio/optical integration is vital because AI research is currently compartmentalized within these domains, leaving their interaction poorly understood. Our approach explores multi-network orchestration and AI control integration, filling a critical gap in standardized frameworks for AI-driven coordination in 6G networks. The DMMAI framework is a step towards a global standard for AI in 6G, aiming to establish reference use cases, data and model management methods, and benchmarking platforms for future AI/ML solutions.
Experimenting in a Global Multi-Domain Testbed
Upcoming AI-based and 5G applications demand new network management approaches capable of coping with unprecedented levels of flexibility, scalability, and energy efficiency. To make these use cases tangible and feasible, network management solutions aim to rely on multi-domain, multi-tier architectures that permit complex end-to-end orchestration of network resources. However, current research on scheduling functions and task-offloading algorithms often focuses on a single domain, so exploring large-scale interoperable solutions remains a challenge. Fortunately for the networking research community, a number of testing facilities deployed at different geographical locations around the world can be integrated and used as a single joint multi-domain infrastructure. In this demo paper, we present a hands-on experience of integrating different high-performance testbeds, located in the USA, Belgium, and the Netherlands, to enable multi-domain, large-scale experimentation. We demonstrate end-to-end performance characteristics of the testbed integration and describe the main takeaways and lessons learned to guide researchers toward successful deployments on such an end-to-end global infrastructure.
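As a rough illustration of the kind of end-to-end measurement involved in such an integration, the sketch below probes baseline latency from a local node toward a set of remote testbed gateways. It is only a minimal sketch: the hostnames, ports, and sample counts are placeholder assumptions, not the facilities or tooling used in the demo.

```python
# Minimal sketch (not the demo's tooling): probe rough end-to-end latency
# from a local orchestration node to remote testbed gateways.
# The hostnames and ports below are hypothetical placeholders.
import socket
import statistics
import time

TESTBED_GATEWAYS = {            # illustrative endpoints only
    "us-testbed": ("us.example-testbed.org", 22),
    "be-testbed": ("be.example-testbed.org", 22),
    "nl-testbed": ("nl.example-testbed.org", 22),
}

def tcp_connect_rtt(host: str, port: int, samples: int = 5) -> list[float]:
    """Measure TCP connection setup time (a coarse latency proxy) in ms."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=3):
                rtts.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass                # unreachable or filtered; skip this sample
    return rtts

if __name__ == "__main__":
    for name, (host, port) in TESTBED_GATEWAYS.items():
        rtts = tcp_connect_rtt(host, port)
        if rtts:
            print(f"{name}: median {statistics.median(rtts):.1f} ms over {len(rtts)} samples")
        else:
            print(f"{name}: unreachable")
```

TCP connection setup time is used here only as a coarse latency proxy; the actual demo characterizes richer end-to-end performance metrics across the integrated testbeds.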
- Award ID(s): 1743313
- PAR ID: 10314446
- Date Published:
- Journal Name: IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- To interconnect research facilities across wide geographic areas, network operators deploy science networks, also referred to as Research and Education (R&E) networks. These networks allow experimenters to establish dedicated circuits between research facilities for transferring large amounts of data, using advance reservation systems. Intercontinental dedicated circuits typically require coordination between multiple administrative domains, which need to agree on a suitable advance reservation. In traditional systems, the success rate of finding an advance reservation decreases as the number of participating domains increases, because the circuit is composed over a single path. To improve provisioning of multi-domain advance reservations, we propose an architecture for end-to-end service orchestration in multi-domain science networks that leverages software-defined exchanges (SDX) to provide multi-path, multi-domain advance reservations. We have implemented an orchestrator for multi-path, multi-domain advance reservations and an SDX to support these services. Our orchestration architecture enables multi-path, multi-domain advance reservations and improves the reservation success rate from 50% in single-path systems to 99% when four paths are available. (A toy sketch of why additional candidate paths improve the success rate appears after this list.)
- The rapid growth in technology and the wide use of the internet have increased smart applications, such as intelligent transportation control systems and the Internet of Things, which heavily rely on efficient and reliable network connectivity. To overcome the high bandwidth workload on the network and to minimize latency for real-time applications, computation can be moved from the central cloud to a distributed edge cloud. Edge computing benefits various smart applications that use distributed networks for data analytics and services. Unlike existing cloud management solutions, edge computing needs to move cloud management services toward distributed, heterogeneous edge nodes for multi-tenant user applications. However, existing cloud management services do not offer remote deployment of multi-tenant user applications on a cloud of edge nodes. In this paper, we propose a practical edge cloud software framework for deploying multi-tenant distributed smart applications. With multiple distributed end nodes, automatic discovery of all active end nodes is required for deploying multi-tenant user applications. However, existing cloud solutions require either a private network or a fixed IP address, which is not achievable for distributed edge nodes. Most edge nodes connect through the public internet without a fixed IP, and some even connect through IEEE 802.15-based sensor networks. We propose to build a software platform to manage the distributed edge nodes as well as support services to deploy and launch isolated, multi-tenant user applications through lightweight containers. We propose an architectural solution to remotely access edge cloud management services through intermittent internet connections. We open-sourced our complete set of software solutions and analyzed the major performance metrics of the edge cloud platform. (A minimal sketch of registration-based node discovery appears after this list.)
- In recent years, we have seen the success of network representation learning (NRL) methods in diverse domains ranging from computational chemistry to drug discovery and from social network analysis to bioinformatics algorithms. However, each such NRL method is typically prototyped in a programming environment familiar to the developer. Moreover, such methods rarely scale out to large-scale networks or graphs. Such restrictions are problematic to domain scientists or end-users who want to scale a particular NRL method-of-interest on large graphs from their specific domain. In this work, we present a novel system, WebMILE, to democratize this process. WebMILE can scale an unsupervised network embedding method written in the user's preferred programming language on large graphs. It provides an easy-to-use Graphical User Interface (GUI) for the end-user. The user provides the necessary input (embedding method file, graph, required packages information) through a simple GUI, and WebMILE executes the input network embedding method on the given input graph. WebMILE leverages a pioneering multi-level method, MILE (alternatively DistMILE if the user has access to a cluster), that can scale a network embedding method on large graphs. The language agnosticity is achieved through a simple Docker interface. In this demonstration, we will showcase how a domain scientist or end-user can utilize WebMILE to rapidly prototype and learn node embeddings of a large graph in a flexible and efficient manner, ensuring the twin goals of high productivity and high performance.
- With the proliferation of data movement across the Internet, global data traffic per year has already exceeded the zettabyte scale. The network infrastructure and end-systems facilitating this vast data movement consume an extensive amount of electricity, measured in terawatt-hours per year. This massive energy footprint costs the world economy billions of dollars, partially due to energy consumed at the network end-systems. Although extensive research has been done on managing power consumption within the core networking infrastructure, there is little research on reducing the power consumption at the end-systems during active data transfers. This paper presents a novel cross-layer optimization framework, called Cross-LayerHLA, to minimize energy consumption at the end-systems by applying machine learning techniques to historical transfer logs and extracting the hidden relationships between different parameters affecting both the performance and resource utilization. It utilizes offline analysis to improve online learning and dynamic tuning of application-level and kernel-level parameters with minimal overhead. This approach minimizes end-system energy consumption and maximizes data transfer throughput. Our experimental results show that Cross-LayerHLA outperforms other state-of-the-art solutions in this area. (A toy sketch of selecting transfer parameters from historical logs follows below.)
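For the multi-path advance-reservation item above, the following toy model illustrates why offering several candidate paths raises the end-to-end success rate. It is a sketch under simplifying assumptions (independent, disjoint paths and a uniform per-domain grant probability), not the orchestrator's algorithm; the per-domain probability and domain count are made-up values chosen so that a single path lands near 50%, and the reported 99% figure comes from the paper's own evaluation, not from this model.

```python
# Toy model of multi-path, multi-domain advance reservations (illustrative only).
# Assumption: each domain on a path grants the requested slot independently with
# probability p; a path succeeds only if all of its domains agree, and the
# request succeeds if at least one candidate path succeeds.

def single_path_success(p_domain: float, n_domains: int) -> float:
    """Probability that one path crossing n_domains can be fully reserved."""
    return p_domain ** n_domains

def multi_path_success(p_domain: float, n_domains: int, n_paths: int) -> float:
    """Probability that at least one of n_paths disjoint candidate paths succeeds."""
    p_path = single_path_success(p_domain, n_domains)
    return 1.0 - (1.0 - p_path) ** n_paths

if __name__ == "__main__":
    p, domains = 0.84, 4          # illustrative per-domain grant probability
    for paths in (1, 2, 4):
        print(f"{paths} path(s): success ≈ {multi_path_success(p, domains, paths):.2f}")
```

Even this simple independence model shows the trend: a single path across four domains succeeds only about half the time, while four candidate paths push the success rate well above 90%.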
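For the edge cloud framework item above, one common way to keep nodes without fixed IPs discoverable is to have each node register itself over an outbound heartbeat to a reachable manager, so the manager never needs to contact the node directly. The sketch below shows that general pattern only; the manager URL, payload fields, and interval are hypothetical and are not taken from the paper's platform.

```python
# Minimal sketch (an assumption, not the paper's implementation) of
# registration-based discovery for edge nodes without fixed IPs: each node
# periodically posts a heartbeat to a reachable management endpoint.
import json
import socket
import time
import urllib.request

MANAGER_URL = "https://edge-manager.example.org/register"   # placeholder URL
NODE_ID = socket.gethostname()

def send_heartbeat() -> None:
    payload = json.dumps({
        "node_id": NODE_ID,
        "capabilities": {"runtime": "container", "arch": "arm64"},  # illustrative
        "timestamp": time.time(),
    }).encode()
    req = urllib.request.Request(
        MANAGER_URL, data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()   # manager records the node's current public address

if __name__ == "__main__":
    while True:
        try:
            send_heartbeat()
        except OSError:
            pass                      # intermittent connectivity: retry later
        time.sleep(30)                # heartbeat interval
```

Because the connection is always initiated outbound by the node, this pattern also tolerates the intermittent internet connections mentioned in the abstract.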
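For the Cross-LayerHLA item above, the sketch below illustrates the general idea of mining historical transfer logs offline to choose energy-efficient transfer parameters. It is not the paper's algorithm: the log schema, parameter names (concurrency, parallelism), values, and the throughput-per-watt score are illustrative assumptions.

```python
# Toy offline selection of transfer parameters from historical logs
# (illustrative, not the Cross-LayerHLA algorithm). Each log entry records a
# parameter setting and the observed throughput and average power draw; we
# rank settings by throughput per watt.
from collections import defaultdict

HISTORICAL_LOGS = [
    # (concurrency, parallelism, throughput_gbps, avg_power_watts) -- made-up values
    (1, 1, 1.2, 180.0),
    (4, 2, 3.9, 210.0),
    (8, 4, 5.1, 260.0),
    (16, 8, 5.3, 330.0),
]

def best_setting(logs):
    """Return the (concurrency, parallelism) pair with the best Gbps-per-watt."""
    scores = defaultdict(list)
    for cc, par, thr, watts in logs:
        scores[(cc, par)].append(thr / watts)
    return max(scores, key=lambda k: sum(scores[k]) / len(scores[k]))

if __name__ == "__main__":
    cc, par = best_setting(HISTORICAL_LOGS)
    print(f"offline choice: concurrency={cc}, parallelism={par}")
```

An online tuner could then start from this offline choice and adjust parameters dynamically, which is the spirit of combining offline analysis with online learning described in the abstract.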