Modern software systems are deployed in a highly dynamic, uncertain environment. Ideally, a robust system should be capable of maintaining its most critical requirements even in the presence of possible deviations in the environment. We propose a technique called behavioral robustification, which involves systematically and rigorously improving the robustness of a design against potential deviations. Given behavioral models of a system and its environment, along with a set of user-specified deviations, our robustification method produces a redesign that is capable of satisfying a desired property even when the environment exhibits those deviations. In particular, we describe how the robustification problem can be formulated as a multi-objective optimization problem, where the goal is to prevent the deviating environment from causing a violation of a desired property, while maximizing the amount of existing functionality that is preserved and minimizing the cost of changes to the original design. We demonstrate the effectiveness of our approach on case studies involving the robustness of an electronic voting machine and safety-critical interfaces.
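The multi-objective formulation described above can be summarized compactly. The following is a hypothetical sketch based only on the abstract's wording, where M is the original design, M′ a candidate redesign, E′ the deviated environment, and P the desired property; all symbol names are assumptions, not the paper's notation:

```latex
% Hypothetical formalization of robustification as multi-objective
% optimization; symbol names are illustrative, not the paper's.
\begin{align*}
  \max_{M'}\;   & |\,\mathit{beh}(M') \cap \mathit{beh}(M)\,|
                &&\text{(retain existing functionality)}\\
  \min_{M'}\;   & \mathit{cost}(M \rightarrow M')
                &&\text{(minimize changes to the original design)}\\
  \text{s.t.}\; & M' \parallel E' \models P
                &&\text{(property holds under the deviations)}
\end{align*}
```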
A Behavioral Notion of Robustness for Software Systems
Software systems are designed and implemented with assumptions about the environment. However, once the system is deployed, the actual environment may deviate from its expected behavior, possibly undermining desired properties of the system. To enable systematic design of systems that are robust against potential environmental deviations, we propose a rigorous notion of robustness for software systems. In particular, the robustness of a system is defined as the largest set of deviating environmental behaviors under which the system is capable of guaranteeing a desired property. We describe a new set of design analysis problems based on our notion of robustness, and a technique for automatically computing robustness of a system given its behavior description. We demonstrate potential applications of our robustness notion on two case studies involving network protocols and safety-critical interfaces.
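The abstract's definition can be written as a formula. The rendering below is a sketch under assumed notation (beh(·) for behavior sets, ∥ for parallel composition, ⊨ for property satisfaction), not the paper's exact formalism:

```latex
% Sketch: robustness as the largest set of deviating environment
% behaviors under which system M still guarantees property P
% alongside its expected environment E.
\Delta(M, E, P) \;=\; \text{the largest } \delta \text{ such that }\;
  M \parallel \bigl(\mathit{beh}(E) \cup \delta\bigr) \models P
```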
- Award ID(s):
- 1801546
- PAR ID:
- 10169476
- Date Published:
- Journal Name:
- Proceedings of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Effective stormwater management requires systems that operate safely and deliver improved environmental outcomes in a cost-effective manner. However, current design practices typically evaluate performance assuming that a given system starts empty and operates independently from nearby stormwater infrastructure. There is a conspicuous need for more realistic design-phase indicators of performance that consider a larger set of initial conditions and the effects of coupled dynamics. To this end, we apply a control-theoretic method, called reachability analysis, to produce a more objective measure of system robustness. We make two primary contributions in this work. First, we demonstrate how the application of reachability analysis to a dynamic model of a stormwater network can characterize the set of initial conditions from which every element in the network can avoid overflowing under a given surface runoff signal of finite duration. This is, to the authors’ knowledge, the first published application of reachability analysis to stormwater systems. Our second contribution is to offer an interpretation of the outcomes of the proposed reachability analysis as a measure of system robustness that can provide useful information when making critical design decisions. We illustrate the effectiveness of this method in revealing the trade-offs of particular design choices relative to a desired level of robustness.
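To make the reachability idea concrete, here is a minimal sketch for a single storage node, assuming simple mass-balance dynamics with a constant release rate; all names and parameters (`capacity`, `release_rate`, the runoff pulse) are illustrative, not from the paper, which analyzes full networks:

```python
# Backward reachability for one storage node: compute the largest
# initial storage level from which no overflow occurs over a finite
# runoff signal.  A minimal single-node sketch for illustration only.

def max_safe_initial_storage(runoff, capacity, release_rate):
    """Return the largest initial level x0 such that the trajectory
    x[t+1] = max(x[t] + runoff[t] - release_rate, 0) never exceeds
    `capacity` when inflow arrives (i.e., never overflows)."""
    bound = capacity  # any non-overflowing terminal level is acceptable
    for w in reversed(runoff):
        # x + w must fit in the basin, and the post-release level must
        # still lie inside the safe set computed for the next step.
        bound = min(capacity - w, bound + release_rate - w)
        if bound < 0:
            return None  # no initial condition avoids overflow
    return min(bound, capacity)

if __name__ == "__main__":
    storm = [2.0, 5.0, 8.0, 3.0, 1.0]  # illustrative runoff pulse
    print(max_safe_initial_storage(storm, capacity=10.0, release_rate=4.0))
```

For this monotone single-node model the safe initial set is an interval, so one backward pass suffices; networks of coupled elements, as in the paper, require set-based reachability tools.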
-
Many Cyber-Physical Systems (CPS) have timing constraints that must be met by the cyber components (software and the network) to ensure safety. It is a tedious job to check whether a CPS meets its timing requirements, especially when the system is distributed and the software and/or the underlying computing platforms are complex. Furthermore, the system design is brittle, since a timing failure can still happen due to, e.g., a network failure or a soft-error bit flip. In this paper, we propose a new design methodology called Plan B, where the timing constraints of the CPS are monitored at runtime and a proper backup routine is executed to ensure safety when a timing failure happens. We provide a model for expressing the desired timing behavior using a set of timing constructs in C/C++ code and for efficiently monitoring them at runtime. We showcase the effectiveness of our approach by conducting experiments on three case studies: 1) the full software stack for autonomous driving (Apollo), 2) a multi-agent system with 1/10th-scale model robots, and 3) a quadrotor for a search-and-rescue application. We show that the system remains safe and stable even when intentional faults are injected to cause a timing failure. We also demonstrate that the system can achieve graceful degradation when a less extreme timing failure happens.
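The paper's timing constructs are C/C++-based; as a purely illustrative analogue, the Python sketch below captures the core pattern: monitor a task's deadline at runtime and fall back to a backup routine on a timing failure. All names here (`plan_trajectory`, `brake_safely`) are hypothetical:

```python
# Hypothetical Python analogue of the Plan B pattern (the paper's actual
# constructs are C/C++): run a task under a deadline and execute a
# backup routine if the timing constraint is violated at runtime.
import concurrent.futures

def with_deadline(task, backup, deadline_s):
    """Run `task`; if it misses `deadline_s`, run `backup` so the
    system degrades gracefully instead of failing unsafely."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(task)
    try:
        result = future.result(timeout=deadline_s)
        pool.shutdown(wait=True)
        return result
    except concurrent.futures.TimeoutError:
        pool.shutdown(wait=False)   # don't block on the overrunning task
        return backup()             # timing failure: switch to Plan B

def plan_trajectory():              # hypothetical expensive planner
    ...

def brake_safely():                 # hypothetical safe fallback routine
    return "hold last safe command and decelerate"

print(with_deadline(plan_trajectory, brake_safely, deadline_s=0.05))
```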
-
Behavior involves the ongoing interaction between an organism and its environment. One of the prevailing theories of adaptive behavior is that organisms are constantly making predictions about their future environmental stimuli. However, how they acquire that predictive information is still poorly understood. Two complementary mechanisms have been proposed: predictions are generated from an agent’s internal model of the world, or predictions are extracted directly from the environmental stimulus. In this work, we demonstrate that predictive information, measured using bivariate mutual information, cannot distinguish between these two kinds of systems. Furthermore, we show that predictive information cannot distinguish between organisms that are adapted to their environments and random dynamical systems exposed to the same environment. To understand the role of predictive information in adaptive behavior, we need to be able to identify where it is generated. To do this, we decompose information transfer across the different components of the organism-environment system and track the flow of information in the system over time. To validate the proposed framework, we evaluated it on a set of computational models of idealized agent-environment systems. Analysis of the systems revealed three key insights. First, predictive information, when sourced from the environment, can be reflected in any agent irrespective of its ability to perform a task. Second, predictive information, when sourced from the nervous system, requires special dynamics acquired during the process of adapting to the environment. Third, the magnitude of predictive information in a system can be different for the same task if the environmental structure changes.
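Predictive information here is the bivariate mutual information between a signal's present and its future. The sketch below is an illustrative discrete estimator; the bin count and lag are arbitrary choices, not the paper's settings:

```python
# Illustrative estimator of predictive information as the bivariate
# mutual information I(s_t ; s_{t+lag}) of a discretized signal.
import numpy as np

def mutual_information(x, y, bins=8):
    """I(X;Y) in bits, estimated from a 2-D histogram of paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of X
    py = pxy.sum(axis=0, keepdims=True)    # marginal of Y
    nz = pxy > 0                           # avoid log(0) terms
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def predictive_information(signal, lag=1, bins=8):
    """How much the present of the signal tells us about its future."""
    return mutual_information(signal[:-lag], signal[lag:], bins=bins)

rng = np.random.default_rng(0)
sine = np.sin(np.linspace(0, 20 * np.pi, 5000)) + 0.1 * rng.standard_normal(5000)
print(predictive_information(sine))                        # structured: high PI
print(predictive_information(rng.standard_normal(5000)))   # white noise: ~0
```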
-
Modern cyber-physical systems (CPS) are often developed in a model-based development (MBD) paradigm. The MBD paradigm involves the construction of different kinds of models: (1) a plant model that encapsulates the physical components of the system (e.g., mechanical, electrical, chemical components) using representations based on differential and algebraic equations, (2) a controller model that encapsulates the embedded software components of the system, and (3) an environment model that encapsulates physical assumptions on the external environment of the CPS application. In order to reason about the correctness of CPS applications, we typically pose the following question: For all possible environment scenarios, does the closed-loop system consisting of the plant and the controller exhibit the desired behavior? Typically, the desired behavior is expressed in terms of properties that specify unsafe behaviors of the closed-loop system. Often, such behaviors are expressed using variants of real-time temporal logics. In this chapter, we will examine formal methods based on bounded-time reachability analysis, simulation-guided reachability analysis, deductive techniques based on safety invariants, and formal, requirement-driven testing techniques. We will review key results in the literature, and discuss the scalability and applicability of such techniques in various academic and industrial contexts. We conclude this chapter by discussing the challenges that newer CPS applications using AI-based software components pose to formal verification and testing techniques.
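As a toy illustration of the closed-loop question, the following requirement-driven testing sketch samples environment scenarios, simulates a hypothetical plant-controller loop over a bounded horizon, and reports any trace violating a safety property. The dynamics, gains, and bounds are all invented for illustration, not taken from the chapter:

```python
# Hypothetical falsification-style sketch: sample environment scenarios,
# simulate the closed loop (plant + controller) over a bounded horizon,
# and report any trace that violates the safety property.
import random

def plant(x, u, d):                 # toy first-order plant, disturbance d
    return x + 0.1 * (u + d)

def controller(x, setpoint=0.0):    # toy proportional controller
    return -2.0 * (x - setpoint)

def safe(x):                        # safety property: state stays in bounds
    return abs(x) <= 1.5

def falsify(trials=1000, horizon=100, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0)                       # sampled initial state
        scenario = [rng.uniform(-0.5, 0.5) for _ in range(horizon)]
        for t, d in enumerate(scenario):
            x = plant(x, controller(x), d)
            if not safe(x):
                return (t, x)       # counterexample: property violated
    return None                     # no violation found (not a proof!)

print(falsify())
```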