The supervisory control and data acquisition (SCADA) network in a smart grid needs to be reliable and efficient in order to deliver real-time data to the controller. Introducing SDN into a SCADA network helps in deploying novel grid control operations as well as in managing them. Because the overall network cannot be transformed overnight to contain only SDN-enabled devices due to budget constraints, a systematic deployment methodology is needed. In this work, we present a framework, named SDNSynth, that can design a hybrid network consisting of both legacy forwarding devices and programmable SDN-enabled switches. The design satisfies the resiliency requirements of the SCADA network, which are specified with respect to a set of identified threat vectors. The deployment plan primarily includes the best placements of the SDN-enabled switches and may also include one or more new links to be installed. We model and implement the SDNSynth framework, which satisfies several requirements and constraints involved in the resilient operation of the SCADA network. It uses satisfiability modulo theories (SMT) to encode the synthesis model and solve it. We demonstrate SDNSynth on a case study and evaluate its performance on different synthetic SCADA systems.
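The abstract above says SDNSynth encodes the synthesis model with satisfiability modulo theories (SMT). As a rough, hedged illustration of what such an encoding can look like, the sketch below uses the Z3 Python bindings with a made-up five-switch topology, a made-up budget, and a deliberately simplified resiliency rule (every critical path must contain at least one SDN-enabled switch); none of these names or constraints come from the paper itself.

```python
# Illustrative SMT encoding of SDN switch placement with the Z3 Python API.
# The topology, budget, and resiliency rule are toy placeholders, not the
# actual SDNSynth constraint system described in the paper.
from z3 import Bool, Optimize, Sum, If, sat, is_true

nodes = ["s1", "s2", "s3", "s4", "s5"]
# Paths from field devices (RTUs) to the control center that must stay controllable.
critical_paths = [["s1", "s2", "s4"], ["s1", "s3", "s5"], ["s2", "s3", "s4"]]
budget = 2  # maximum number of legacy switches that may be replaced

upgrade = {n: Bool(f"sdn_{n}") for n in nodes}
opt = Optimize()

# Budget constraint: at most `budget` switches become SDN-enabled.
opt.add(Sum([If(upgrade[n], 1, 0) for n in nodes]) <= budget)

# Toy resiliency requirement: every critical path carries at least one
# programmable switch so its flows can be rerouted after a failure or attack.
for path in critical_paths:
    opt.add(Sum([If(upgrade[n], 1, 0) for n in path]) >= 1)

# Prefer the cheapest plan that satisfies the requirements.
opt.minimize(Sum([If(upgrade[n], 1, 0) for n in nodes]))

if opt.check() == sat:
    model = opt.model()
    plan = [n for n in nodes if is_true(model.evaluate(upgrade[n]))]
    print("Upgrade to SDN:", plan)
else:
    print("No deployment plan satisfies the budget and resiliency constraints.")
```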
The Cost on System Performance of Requirements on Differentiable Variables
Abstract: System design is commonly thought of as a process of maximizing a design objective subject to constraints, among which are the system requirements. Given system-level requirements, a convenient management approach is to disaggregate the system into subsystems and to “flowdown” the system-level requirements to the subsystem or lower levels. We note, however, that requirements truly are constraints, and they typically impose a penalty on system performance. Furthermore, disaggregation of the system-level requirements into the flowdown requirements creates added sets of constraints, all of which have the potential to impose further penalties on overall system performance. This is a highly undesirable effect of an otherwise beneficial system design management process. This article derives conditions that may be imposed on the flowdown requirements to assure that they do not penalize overall system performance beyond the system-level requirement.
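To make the abstract's central point concrete, here is a small, invented optimization example (not taken from the article): total mass is minimized first subject to a system-level capability requirement, and then subject to fixed flowdown requirements that guarantee the same system-level requirement. The flowdown point (7, 3) penalizes performance, while (5, 5), which coincides with the system-level optimum, does not.

```python
# Toy illustration (not the article's model): disaggregating a system-level
# requirement into flowdown requirements can cost performance unless the
# flowdown point coincides with the system-level optimum.
from scipy.optimize import minimize

mass = lambda d: d[0] ** 2 + d[1] ** 2        # objective to minimize
R = 10.0                                      # system-level requirement: d1 + d2 >= R

# Case 1: optimize against the system-level requirement directly.
sys_level = minimize(mass, x0=[1.0, 1.0], method="SLSQP",
                     constraints=[{"type": "ineq", "fun": lambda d: d[0] + d[1] - R}])

# Case 2: flow the requirement down as d1 >= r1, d2 >= r2 with r1 + r2 = R.
def flowdown_optimum(r1, r2):
    cons = [{"type": "ineq", "fun": lambda d, r=r1: d[0] - r},
            {"type": "ineq", "fun": lambda d, r=r2: d[1] - r}]
    return minimize(mass, x0=[r1, r2], method="SLSQP", constraints=cons).fun

print("system-level optimum mass :", round(sys_level.fun, 2))           # 50.0 at d1 = d2 = 5
print("flowdown (7, 3) mass      :", round(flowdown_optimum(7, 3), 2))  # 58.0, penalized
print("flowdown (5, 5) mass      :", round(flowdown_optimum(5, 5), 2))  # 50.0, no penalty
```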
- Award ID(s): 1923164
- PAR ID: 10328130
- Date Published:
- Journal Name: Journal of Mechanical Design
- Volume: 143
- Issue: 5
- ISSN: 1050-0472
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Thermal interfaces are vital for effective thermal management in modern electronics, especially in the emerging fields of flexible electronics and soft robotics, which require interface materials to be soft and flexible in addition to having high thermal performance. Here, a novel sandwich-structured thermal interface material (TIM) is developed that simultaneously possesses record-low thermal resistance and high flexibility. Frequency-domain thermoreflectance (FDTR) is employed to investigate the overall thermal performance of the sandwich structure. As the core of this sandwich, a vertically aligned copper nanowire (CuNW) array preserves its high intrinsic thermal conductivity, which is further enhanced by 60% via a thick 3D graphene (3DG) coating. The thin copper layers on the top and bottom play a critical role in protecting the nanowires during device assembly. Through the bottom-up fabrication process, excellent contacts between the graphene-coated CuNWs and the top/bottom layers are realized, leading to minimal interfacial resistance. In total, the thermal resistance of the sandwich is determined to be as low as ~0.23 mm² K W⁻¹. This work investigates a new generation of flexible thermal interface materials with ultralow thermal resistance and therefore holds great promise for advanced thermal management in a wide variety of electronics.
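For readers who want to relate the reported value to a simple layer model: the resistance of a layered stack is often estimated as the sum of each layer's thickness divided by its thermal conductivity, plus the contact resistances at the internal interfaces. The sketch below performs only that arithmetic with invented layer values; it does not reproduce the measurements or structure reported in the paper.

```python
# Series thermal-resistance estimate for a layered TIM stack.
# Layer thicknesses/conductivities and contact terms are illustrative only,
# not values reported for the CuNW/3D-graphene sandwich in the paper.
layers = [                      # (name, thickness in m, thermal conductivity in W/m-K)
    ("top Cu layer",    5e-6, 400.0),
    ("CuNW/3DG core",  40e-6, 150.0),
    ("bottom Cu layer", 5e-6, 400.0),
]
contact_resistances = [0.02e-6, 0.02e-6]   # m^2 K/W per internal interface (assumed)

R_total = sum(t / k for _, t, k in layers) + sum(contact_resistances)
print(f"Estimated stack resistance: {R_total * 1e6:.3f} mm^2 K/W")
```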
We present a principal-agent model of a one-shot, shallow, systems engineering process. The process is "one-shot" in the sense that decisions are made during a single time step and are final. The term "shallow" refers to a one-layer hierarchy of the process. Specifically, we assume that the systems engineer has already decomposed the problem into subsystems and that each subsystem is assigned to a different subsystem engineer. Each subsystem engineer works independently to maximize their own expected payoff. The goal of the systems engineer is to maximize the system-level payoff by incentivizing the subsystem engineers. We restrict our attention to requirements-based system-level payoffs, i.e., the systems engineer makes a profit only if all the design requirements are met. We illustrate the model using the design of an Earth-orbiting satellite system where the systems engineer determines the optimum incentive structures and requirements for two subsystems: the propulsion subsystem and the power subsystem. The model enables the analysis of a systems engineer's decisions about optimal passed-down requirements and incentives for subsystem engineers under different levels of task difficulty and associated costs. Sample results, for the case of risk-neutral systems and subsystem engineers, show that it is not always in the best interest of the systems engineer to pass down the true requirements. As expected, the model predicts that for small to moderate task uncertainties the optimal requirements are higher than the true ones, effectively eliminating the probability of failure for the systems engineer. In contrast, the model predicts that for large task uncertainties the optimal requirements should be smaller than the true ones in order to lure the subsystem engineers into participation.
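A back-of-the-envelope sketch of the effect described in the last two sentences, under an invented outcome model (the subsystem engineer aims at the passed-down target and the realized attribute is that target plus Gaussian noise) rather than the paper's actual principal-agent formulation: padding the passed-down requirement above the true one raises the probability that the true requirement is ultimately met, and the value of padding depends on the task uncertainty.

```python
# Toy illustration of passing down a padded requirement under task uncertainty.
# The outcome model (target + Gaussian noise) is an assumption for illustration,
# not the principal-agent model used in the paper.
import numpy as np

rng = np.random.default_rng(0)
true_requirement = 1.0

def p_meet_true(passed_down, sigma, n=200_000):
    """Probability the true requirement is met when the subsystem engineer
    aims exactly at the passed-down requirement and outcomes are noisy."""
    outcomes = passed_down + sigma * rng.standard_normal(n)
    return np.mean(outcomes >= true_requirement)

for sigma in (0.05, 0.2, 0.5):
    for pad in (0.0, 0.1, 0.3):
        p = p_meet_true(true_requirement + pad, sigma)
        print(f"sigma={sigma:.2f}  passed-down={true_requirement + pad:.1f}  "
              f"P(true req met)={p:.3f}")
```

The large-uncertainty reversal mentioned in the abstract additionally involves the subsystem engineers' participation decision, which this toy model omits.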
We consider load balancing in large-scale heterogeneous server systems in the presence of data locality that imposes constraints on which tasks can be assigned to which servers. The constraints are naturally captured by a bipartite graph between the servers and the dispatchers handling assignments of various arrival flows. When a task arrives, the corresponding dispatcher assigns it to a server with the shortest queue among d randomly selected servers obeying these constraints. Server processing speeds are heterogeneous, and they depend on the server type. For a broad class of bipartite graphs, we characterize the limit of the appropriately scaled occupancy process, both on the process level and in steady state, as the system size becomes large. Using such a characterization, we show that imposing data locality constraints can significantly improve the performance of heterogeneous systems. This is in stark contrast to either heterogeneous servers in a fully flexible system or data locality constraints in systems with homogeneous servers, both of which have been observed to degrade system performance. Extensive numerical experiments corroborate the theoretical results. Funding: This work was partially supported by the National Science Foundation [CCF, 07/2021–06/2024].
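A minimal discrete-time sketch of the assignment rule described above (a toy simulation with an invented compatibility graph, arrival pattern, and service rates, not the paper's asymptotic analysis): each dispatcher samples d servers from its compatibility set and routes the arriving task to the shortest of those queues.

```python
# Toy simulation of power-of-d (JSQ(d)) routing under data-locality constraints.
# The bipartite compatibility graph and service rates are invented for
# illustration; the paper studies the large-system limit analytically.
import random

random.seed(1)
d = 2
service_rate = {0: 0.9, 1: 0.9, 2: 0.5, 3: 0.5}          # heterogeneous servers
compatible = {"dispatcher_A": [0, 1, 2], "dispatcher_B": [1, 2, 3]}
queues = {s: 0 for s in service_rate}

def arrival(dispatcher):
    """Sample d compatible servers and join the shortest queue among them."""
    choices = random.sample(compatible[dispatcher], d)
    target = min(choices, key=lambda s: queues[s])
    queues[target] += 1

def service_step():
    """Each busy server completes a job with probability equal to its rate."""
    for s, mu in service_rate.items():
        if queues[s] > 0 and random.random() < mu:
            queues[s] -= 1

for t in range(10_000):
    arrival(random.choice(list(compatible)))   # one arrival per slot, random flow
    service_step()

print("final queue lengths:", queues)
```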
We investigate how sequential decision-making analysis can be used to model system resilience. In the aftermath of an extreme event, the agents involved in emergency management aim at an optimal recovery process, trading off the loss due to lack of system functionality against the investment needed for a fast recovery. This process can be formulated as a sequential decision-making optimization problem, where the overall loss has to be minimized by adopting an appropriate policy, and dynamic programming applied to Markov Decision Processes (MDPs) provides a rational and computationally feasible framework for a quantitative analysis. The paper investigates how trends of post-event loss and recovery can be understood in light of the sequential decision-making framework. Specifically, it is well known that the system's functionality is often taken to a level different from that before the event: this can be the result of budget constraints and/or economic opportunity, and the framework has the potential to integrate these considerations. Here, however, we focus on the specific case of an agent learning something new about the process and reacting by updating the target functionality level of the system. We illustrate how this can happen in a simplified setting by using Hidden-Model MDPs (HM-MDPs) to model the management of a set of components under model uncertainty. When an extreme event occurs, the agent updates the hazard model and, consequently, her response and long-term planning.
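A compact value-iteration sketch for a post-event recovery problem of the kind described here, with invented states, costs, and transition probabilities rather than the paper's HM-MDP: states are functionality levels, and each action trades repair investment against the loss from remaining below full functionality.

```python
# Value iteration for a toy post-event recovery MDP (illustrative only; the
# paper uses Hidden-Model MDPs to additionally capture model uncertainty).
import numpy as np

levels = 5                      # functionality levels 0 (failed) .. 4 (fully restored)
actions = {"wait": (0.0, 0.0), "slow_repair": (1.0, 0.4), "fast_repair": (3.0, 0.8)}
loss_per_step = lambda s: 2.0 * (levels - 1 - s)   # loss from lack of functionality
gamma = 0.95                                       # discount factor

V = np.zeros(levels)
for _ in range(500):                               # value iteration sweeps
    V_new = np.zeros(levels)
    for s in range(levels):
        best = -np.inf
        for cost, p_up in actions.values():
            nxt = min(s + 1, levels - 1)
            q = -loss_per_step(s) - cost + gamma * (p_up * V[nxt] + (1 - p_up) * V[s])
            best = max(best, q)
        V_new[s] = best
    V = V_new

policy = {}
for s in range(levels):
    def q_value(a):
        cost, p_up = actions[a]
        nxt = min(s + 1, levels - 1)
        return -loss_per_step(s) - cost + gamma * (p_up * V[nxt] + (1 - p_up) * V[s])
    policy[s] = max(actions, key=q_value)

print("optimal recovery policy by functionality level:", policy)
```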