
Title: Decentralized Control of Distributed Cloud Networks with Generalized Network Flows
Emerging distributed cloud architectures, e.g., fog and mobile edge computing, are playing an increasingly important role in the efficient delivery of real-time stream-processing applications (also referred to as augmented information services), such as industrial automation and metaverse experiences (e.g., extended reality, immersive gaming). While such applications require processed streams to be shared and simultaneously consumed by multiple users/devices, existing technologies lack efficient mechanisms to deal with their inherent multicast nature, leading to unnecessary traffic redundancy and network congestion. In this paper, we establish a unified framework for distributed cloud network control with generalized (mixed-cast) traffic flows that allows optimizing the distributed execution of the required packet processing, forwarding, and replication operations. We first characterize the enlarged multicast network stability region under the new control framework (with respect to its unicast counterpart). We then design a novel queuing system that allows scheduling data packets according to their current destination sets, and leverage Lyapunov drift-plus-penalty control theory to develop the first fully decentralized, throughput- and cost-optimal algorithm for multicast flow control. Numerical experiments validate analytical results and demonstrate the performance gain of the proposed design over existing network control policies.
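The abstract's drift-plus-penalty scheduler over destination-set queues can be illustrated with a toy max-weight rule: at each slot, pick the feasible action that maximizes backlog-weighted service rate minus a penalty V times its cost. The sketch below is a minimal illustration under assumed data layouts (the queue/action representation and the constant V are illustrative, not the paper's implementation):

```python
# Toy drift-plus-penalty scheduling sketch (illustrative, not the paper's code).
# Queues are keyed by the packet's current destination set, per the abstract.

V = 10.0  # assumed penalty weight trading off operational cost vs. backlog


def drift_plus_penalty_choice(queues, actions):
    """Pick the action maximizing sum(backlog * service_rate) - V * cost.

    queues:  dict mapping frozenset(destinations) -> backlog (packets)
    actions: list of (served, cost) pairs, where `served` maps a
             destination set to the rate at which this action drains it
    """
    best, best_score = None, float("-inf")
    for served, cost in actions:
        # Backlog-weighted service minus penalty-scaled cost (the DPP score).
        score = sum(queues.get(ds, 0.0) * rate for ds, rate in served.items())
        score -= V * cost
        if score > best_score:
            best, best_score = (served, cost), score
    return best
```

Larger V favors low-cost actions at the price of higher average backlog, which is the standard throughput/cost trade-off in drift-plus-penalty control.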
Award ID(s):
1816699 2148315
Publication Date:
Journal Name:
IEEE Transactions on Communications
Page Range or eLocation-ID:
1 to 1
Sponsoring Org:
National Science Foundation
More Like this
  1. Next-generation distributed computing networks (e.g., edge and fog computing) enable the efficient delivery of delay-sensitive, compute-intensive applications by facilitating access to computation resources in close proximity to end users. Many of these applications (e.g., augmented/virtual reality) are also data-intensive: in addition to user-specific (live) data streams, they require access to shared (static) digital objects (e.g., image database) to complete the required processing tasks. When required objects are not available at the servers hosting the associated service functions, they must be fetched from other edge locations, incurring additional communication cost and latency. In such settings, overall service delivery performance shall benefit from jointly optimized decisions around (i) routing paths and processing locations for live data streams, together with (ii) cache selection and distribution paths for associated digital objects. In this paper, we address the problem of dynamic control of data-intensive services over edge cloud networks. We characterize the network stability region and design the first throughput-optimal control policy that coordinates processing and routing decisions for both live and static data streams. Numerical results demonstrate the superior performance (e.g., throughput, delay, and resource consumption) obtained via the novel multi-pipeline flow control mechanism of the proposed policy, compared with state-of-the-art algorithms that lack integrated stream processing and data distribution control.
  2. Distributed cyber-infrastructures and Artificial Intelligence (AI) are transformative technologies that will play a pivotal role in the future of society and the scientific community. Internet of Things (IoT) applications harbor vast quantities of connected devices that collect a massive amount of sensitive information (e.g., medical, financial), which is usually analyzed either at the edge or federated cloud systems via AI/Machine Learning (ML) algorithms to make critical decisions (e.g., diagnosis). It is of paramount importance to ensure the security, privacy, and trustworthiness of data collection, analysis, and decision-making processes. However, system complexity and increased attack surfaces make these applications vulnerable to system breaches, single-point of failures, and various cyber-attacks. Moreover, the advances in quantum computing exacerbate the security and privacy challenges. That is, emerging quantum computers can break conventional cryptographic systems that offer cyber-security services, public key infrastructures, and privacy-enhancing technologies. Therefore, there is a vital need for new cyber-security paradigms that can address the resiliency, long-term security, and efficiency requirements of distributed cyber infrastructures. In this work, we propose a vision of distributed architecture and cyber-security framework that uniquely synergizes secure computation, Physical Quantum Key Distribution (PQKD), NIST PostQuantum Cryptography (PQC) efforts, and AI/ML algorithms to achieve breach-resilient, functional, and efficient cyber-security services. At the heart of our proposal lies a new Multi-Party Computation Quantum Network Core (MPC-QNC) that enables fast and yet quantum-safe execution of distributed computation protocols via integration of PQKD infrastructure and hardware acceleration elements.
We showcase the capabilities of MPC-QNC by instantiating it for Public Key Infrastructures (PKI) and federated ML in our HDQPKI and TPQ-ML frameworks, respectively. HDQPKI is (to the best of our knowledge) the first hybrid and distributed post-quantum PKI that harnesses PQKD and NIST PQC standards to offer the highest level of quantum safety with breach-resiliency against active adversaries. TPQ-ML presents a post-quantum secure and privacy-preserving federated ML infrastructure.
  3. Recent advances in machine learning enable wider applications of prediction models in cyber-physical systems. Smart grids are increasingly using distributed sensor settings for distributed sensor fusion and information processing. Load forecasting systems use these sensors to predict future loads to incorporate into dynamic pricing of power and grid maintenance. However, these inference predictors are highly complex and thus vulnerable to adversarial attacks. Moreover, adversarial attacks are synthetic norm-bounded modifications to a limited number of sensors that can greatly affect the accuracy of the overall predictor. It can be much cheaper and more effective to incorporate elements of security and resilience at the earliest stages of design. In this paper, we demonstrate how to analyze the security and resilience of learning-based prediction models in power distribution networks by utilizing a domain-specific deep-learning and testing framework. This framework is developed using DeepForge and enables rapid design and analysis of attack scenarios against distributed smart meters in a power distribution network. It runs the attack simulations in the cloud backend. In addition to the predictor model, we have integrated an anomaly detector to detect adversarial attacks targeting the predictor. We formulate the stealthy adversarial attacks as an optimization problem to maximize prediction loss while minimizing the required perturbations. Under the worst-case setting, where the attacker has full knowledge of both the predictor and the detector, an iterative attack method has been developed to solve for the adversarial perturbation. We demonstrate the framework capabilities using a GridLAB-D based power distribution network model and show how stealthy adversarial attacks can affect smart grid prediction systems even with only partial control of the network.
  4. Serverless computing is an emerging event-driven programming model that accelerates the development and deployment of scalable web services on cloud computing systems. Though widely integrated with the public cloud, serverless computing use is nascent for edge-based, IoT deployments. In this work, we design and develop STOIC (Serverless TeleOperable HybrId Cloud), an IoT application deployment and offloading system that extends the serverless model in three ways. First, STOIC adopts a dynamic feedback control mechanism to precisely predict latency and dispatch workloads uniformly across edge and cloud systems using a distributed serverless framework. Second, STOIC leverages hardware acceleration (e.g. GPU resources) for serverless function execution when available from the underlying cloud system. Third, STOIC can be configured in multiple ways to overcome deployment variability associated with public cloud use. Finally, we empirically evaluate STOIC using real-world machine learning applications and multi-tier IoT deployments (edge and cloud). We show that STOIC can be used for training image processing workloads (for object recognition) – once thought too resource intensive for edge deployments. We find that STOIC reduces overall execution time (response latency) and achieves placement accuracy that ranges from 92% to 97%.
  5. With the remarkable development of solar power panel and inverter technology and the focus on reducing greenhouse emissions, there is increased migration from fossil fuels to carbon-free energy sources (e.g., solar, wind, and geothermal). A new paradigm called Transactive Energy (TE) [3] has emerged that utilizes economic and control techniques to effectively manage Distributed Energy Resources (DERs). Another goal of TE is to improve grid reliability and efficiency. However, to evaluate various TE approaches, a comprehensive simulation tool is needed that is easy to use and capable of simulating the power-grid along with various grid operational scenarios that occur in the transactive energy paradigm. In this research, we present a web-based design and simulation platform (called a design studio) targeted toward evaluation of power-grid distribution systems and transactive energy approaches [1]. The design studio allows users to edit and visualize existing power-grid models graphically, create new power-grid network models, simulate those networks, and inject various scenario-specific perturbations to evaluate specific configurations of transactive energy simulations. The design studio provides (i) a novel Domain-Specific Modeling Language (DSML) using the Web-based Generic Modeling Environment (WebGME [4]) for the graphical modeling of power-grid, cyber-physical attacks, and TE scenarios, and (ii) a reusable cloud-hosted simulation backend using the GridLAB-D power-grid distribution system simulation tool [2].
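The stealthy-attack formulation in item 3 above (maximize prediction loss under a norm bound on a limited sensor subset) can be sketched as projected gradient ascent against a linear predictor. Everything here is an illustrative assumption — the paper's predictor is a deep model and its iterative method also evades a detector:

```python
import numpy as np

# Illustrative norm-bounded attack on a linear load predictor y = w @ x,
# perturbing only a chosen subset of sensors (hypothetical simplification
# of the iterative worst-case attack described in item 3).


def stealthy_attack(w, x, y_true, eps, sensors, steps=50, lr=0.1):
    """Maximize squared prediction error via projected gradient ascent,
    touching only `sensors` and keeping each |delta_i| <= eps."""
    delta = np.zeros_like(x)
    mask = np.zeros_like(x)
    mask[sensors] = 1.0
    for _ in range(steps):
        err = w @ (x + delta) - y_true       # current prediction error
        grad = 2.0 * err * w                 # gradient of err**2 w.r.t. delta
        delta += lr * grad * mask            # ascend only on attacked sensors
        delta = np.clip(delta, -eps, eps)    # project onto the L-inf ball
    return delta
```

The clip step enforces the "norm-bounded" stealth constraint, while the mask enforces the "limited number of sensors" (partial control) constraint.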
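The dynamic feedback control described for STOIC in item 4 above can be illustrated by a minimal dispatcher that maintains a per-target latency estimate and routes each workload to the fastest target. Class and method names are hypothetical and the smoothing rule is a plain exponentially weighted moving average, far simpler than STOIC's actual predictor:

```python
# Minimal feedback-driven dispatch sketch (illustrative, not STOIC's code).


class LatencyDispatcher:
    def __init__(self, targets, alpha=0.3):
        self.alpha = alpha                       # EWMA smoothing factor
        self.estimate = {t: 0.0 for t in targets}
        self.seen = {t: False for t in targets}

    def choose(self):
        """Dispatch target: try each target once, then pick the lowest
        predicted latency."""
        untried = [t for t, s in self.seen.items() if not s]
        if untried:
            return untried[0]
        return min(self.estimate, key=self.estimate.get)

    def observe(self, target, latency):
        """Feed a measured latency back into the running estimate."""
        if not self.seen[target]:
            self.estimate[target] = latency
            self.seen[target] = True
        else:
            a = self.alpha
            self.estimate[target] = a * latency + (1 - a) * self.estimate[target]
```

The feedback loop is the key idea: observed latencies continuously correct the estimates, so a slowing edge node is automatically bypassed in favor of the cloud.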