A self-adaptive system (SAS) can reconfigure at run time in response to adverse combinations of system and environmental conditions in order to continuously satisfy its requirements. Moreover, SASs are subject to cross-cutting non-functional requirements (NFRs), such as performance, security, and usability, that collectively characterize how functional requirements (FRs) are to be satisfied. In many cases, an SAS adaptation may be triggered by the violation of one or more NFRs. For a given NFR, different combinations of hierarchically organized FRs may yield varying degrees of satisfaction (i.e., satisficement). This paper presents Providentia, a search-based technique to optimize NFR satisficement when subjected to various sources of uncertainty (e.g., the environment or interactions between system elements). Providentia searches for optimal combinations of FRs that, when considered with different subgoal decompositions and/or differential weights, provide optimal satisficement of NFR objectives. Experimental results suggest that using an SAS goal model enhanced with search-based optimization significantly improves system performance when compared with manually and randomly generated weights and subgoals.
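To make the search-based idea concrete, the minimal sketch below evolves subgoal weight vectors to maximize average NFR satisficement across sampled environmental scenarios. All names here (`satisficement`, the scenario encoding, the population parameters) are hypothetical illustrations, not Providentia's actual implementation.

```python
# Toy genetic algorithm over FR-subgoal weights for one NFR (illustrative only).
import random

NUM_SUBGOALS = 5
POP_SIZE = 20
GENERATIONS = 50

def satisficement(weights, scenario):
    """Stand-in for evaluating NFR satisficement of a weighted goal
    decomposition under one environmental scenario."""
    return sum(w * s for w, s in zip(weights, scenario))

def fitness(weights, scenarios):
    # Average satisficement across uncertain scenarios.
    return sum(satisficement(weights, s) for s in scenarios) / len(scenarios)

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

def evolve(scenarios):
    pop = [normalize([random.random() for _ in range(NUM_SUBGOALS)])
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda w: fitness(w, scenarios), reverse=True)
        survivors = pop[:POP_SIZE // 2]
        children = []
        for _ in range(POP_SIZE - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            i = random.randrange(NUM_SUBGOALS)
            child[i] += random.gauss(0, 0.1)              # mutation
            children.append(normalize([max(c, 1e-6) for c in child]))
        pop = survivors + children
    return max(pop, key=lambda w: fitness(w, scenarios))

# Uncertain scenarios: sampled per-subgoal satisfaction levels.
scenarios = [[random.random() for _ in range(NUM_SUBGOALS)] for _ in range(10)]
best = evolve(scenarios)
print("best weights:", [round(w, 3) for w in best])
```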
An Empirical Analysis of the Mutation Operator for Run-Time Adaptive Testing in Self-Adaptive Systems
A self-adaptive system (SAS) can reconfigure at run time in response to uncertainty and/or adversity to continually deliver an acceptable level of service. An SAS can experience uncertainty during execution in terms of environmental conditions for which it was not explicitly designed, as well as unanticipated combinations of system parameters that result from a self-reconfiguration or misunderstood requirements. Run-time testing provides assurance that an SAS continually behaves as designed even as the system reconfigures and the environment changes. Moreover, introducing adaptive capabilities via lightweight evolutionary algorithms into a run-time testing framework can enable an SAS to effectively update its test cases in response to uncertainty, alongside the SAS's adaptation engine, while still maintaining assurance that requirements are being satisfied. However, the evolutionary parameters that configure the search process for run-time testing can significantly impact test results. Therefore, this paper provides an empirical study that focuses on the mutation parameter that guides online evolution as applied to a run-time testing framework, in the context of an SAS.
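As an illustration of the parameter under study, the toy mutation operator below perturbs each gene of a test case with probability equal to the mutation rate; sweeping that rate shows how strongly it reshapes test cases. The encoding and names are assumptions for illustration, not the paper's actual framework.

```python
# Per-gene mutation of a test case at a configurable mutation rate.
import random

def mutate(test_case, rate, bounds):
    """With probability `rate`, replace each parameter with a new value
    drawn uniformly from its valid range."""
    mutated = list(test_case)
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < rate:
            mutated[i] = random.uniform(lo, hi)
    return mutated

# A test case exercising, e.g., sensor noise, obstacle count, and speed.
bounds = [(0.0, 1.0), (0, 50), (0.5, 2.0)]
test_case = [0.2, 10, 1.0]

for rate in (0.05, 0.2, 0.5):   # the evolutionary parameter under study
    print(rate, mutate(test_case, rate, bounds))
```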
- Award ID(s): 1657061
- PAR ID: 10088803
- Date Published:
- Journal Name: 2018 IEEE/ACM 11th International Workshop on Search-Based Software Testing (SBST)
- Page Range / eLocation ID: 59-66
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Power capping is an important technique for high-density servers to safely oversubscribe the power infrastructure in a data center. However, power capping is commonly accomplished by dynamically lowering the server processors' frequency levels, which can degrade application performance. For servers that run important machine learning (ML) applications with Service-Level Objective (SLO) requirements, inference performance such as recognition accuracy must be optimized within a certain latency constraint, which demands high server performance. In order to achieve the best inference accuracy under the desired latency and server power constraints, this paper proposes OptimML, a multi-input-multi-output (MIMO) control framework that jointly controls both inference latency and server power consumption by flexibly adjusting the machine learning model size (and thus its required computing resources) when the server frequency needs to be lowered for power capping. Our results on a hardware testbed with widely adopted ML frameworks (including PyTorch, TensorFlow, and MXNet) show that OptimML achieves higher inference accuracy compared with several well-designed baselines, while respecting both latency and power constraints. Furthermore, an adaptive control scheme with online model switching and estimation is designed to achieve analytic assurance of control accuracy and system stability, even in the face of significant workload/hardware variations.
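A highly simplified sketch of the underlying trade-off (not OptimML's actual MIMO controller): when power capping lowers the processor frequency, pick the most accurate model whose predicted latency still meets the SLO. The latency model and the numbers below are invented for illustration.

```python
# Candidate models ordered by size: (name, relative_cost, accuracy).
MODELS = [("small", 1.0, 0.70), ("medium", 2.0, 0.76), ("large", 4.0, 0.80)]

LATENCY_SLO = 100.0   # ms

def predicted_latency(relative_cost, freq_ghz):
    # Toy latency model: compute cost scales inversely with frequency.
    return 25.0 * relative_cost / freq_ghz

def choose_model(freq_ghz):
    """Pick the most accurate model whose predicted latency meets the SLO."""
    feasible = [m for m in MODELS
                if predicted_latency(m[1], freq_ghz) <= LATENCY_SLO]
    return max(feasible, key=lambda m: m[2]) if feasible else MODELS[0]

for freq in (3.0, 1.5, 0.8):   # frequency lowered by power capping
    name, cost, acc = choose_model(freq)
    print(f"{freq} GHz -> {name} (acc {acc}, "
          f"lat {predicted_latency(cost, freq):.0f} ms)")
```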
- This paper presents Latency Management Executor (LaME), a theory-guided adaptive scheduling framework that enhances real-time performance in ROS 2 through dynamic resource allocation and hybrid priority-driven scheduling. LaME introduces the concept of thread classes to dynamically adjust system configurations, ensuring response-time guarantees for real-time chains while maintaining starvation freedom for best-effort chains. By implementing adaptive resource allocation and continuous runtime monitoring, LaME provides robust response times even under fluctuating workloads and resource constraints. We implement our framework for the Autoware reference system and perform our evaluation on an Nvidia Jetson platform. Our results demonstrate that LaME successfully adapts to changing resource availability and workload surges, and effectively balances real-time guarantees with overall system throughput.
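The starvation-freedom idea can be illustrated with a toy dispatcher (purely hypothetical, not LaME's ROS 2 executor): real-time work is served first, but every Nth dispatch slot is reserved for the best-effort queue so it can never starve.

```python
# Hybrid priority dispatch with a reserved best-effort slot (illustrative).
from collections import deque

rt_queue = deque(f"rt-{i}" for i in range(5))   # real-time chains
be_queue = deque(f"be-{i}" for i in range(5))   # best-effort chains
BE_EVERY = 3   # guarantee a best-effort slot every 3rd dispatch

dispatches = 0
while rt_queue or be_queue:
    dispatches += 1
    # Serve best-effort on its reserved slot, or when no real-time work remains.
    take_be = (dispatches % BE_EVERY == 0 and be_queue) or not rt_queue
    job = be_queue.popleft() if take_be else rt_queue.popleft()
    print(f"dispatch {dispatches}: {job}")
```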
- As part of its ongoing efforts to meet the increased spectrum demand, the Federal Communications Commission (FCC) has recently opened up 150 MHz in the 3.5 GHz band for shared wireless broadband use. Access and operations in this band, also known as the Citizens Broadband Radio Service (CBRS), will be managed by a dynamic spectrum access system (SAS) to enable seamless spectrum sharing between secondary users (SUs) and incumbent users. Despite its benefits, SAS's design requirements, as set by the FCC, present privacy risks to SUs, because SUs are required to share sensitive operational information (e.g., location, identity, spectrum usage) with the SAS in order to learn about spectrum availability in their vicinity. In this paper, we propose TrustSAS, a trustworthy framework for SAS that synergizes state-of-the-art cryptographic techniques with blockchain technology in an innovative way to address these privacy issues while complying with the FCC's regulatory design requirements. We analyze the security of our framework and evaluate its performance through analysis, simulation, and experimentation. We show that TrustSAS can offer high security guarantees with reasonable overhead, making it an ideal solution for addressing SUs' privacy issues in an operational SAS environment.
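As a flavor of one cryptographic building block such a framework can combine with a ledger (a loose illustration only; TrustSAS's actual protocol is considerably more involved), an SU can post a hash commitment to its sensitive operational data instead of the data itself, and later open the commitment for an audit.

```python
# Hash commitment to sensitive SU data, posted to an append-only ledger
# (illustrative stand-in for a blockchain; not TrustSAS's protocol).
import hashlib, json, os

def commit(record: dict):
    nonce = os.urandom(16)
    payload = nonce + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest(), nonce

def verify(digest, nonce, record):
    payload = nonce + json.dumps(record, sort_keys=True).encode()
    return digest == hashlib.sha256(payload).hexdigest()

ledger = []   # append-only list of commitments
su_record = {"su_id": "SU-17", "location": [35.1, -80.8], "band": "3.55GHz"}
digest, nonce = commit(su_record)
ledger.append(digest)                       # sensitive data never leaves the SU
print(verify(ledger[0], nonce, su_record))  # later audit: True
```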
- While fiducial inference is widely considered to be R.A. Fisher's great blunder, the goal he initially set for it—'inferring the uncertainty of model parameters on the basis of observations'—has been continually pursued by many statisticians. To this end, we develop a new statistical inference method called extended fiducial inference (EFI). The new method achieves the goal of fiducial inference by leveraging advanced statistical computing techniques while remaining scalable for big data. Extended fiducial inference involves jointly imputing the random errors realized in the observations using stochastic gradient Markov chain Monte Carlo and estimating the inverse function using a sparse deep neural network (DNN). The consistency of the sparse DNN estimator ensures that the uncertainty embedded in the observations is properly propagated to the model parameters through the estimated inverse function, thereby validating downstream statistical inference. Compared to frequentist and Bayesian methods, EFI offers significant advantages in parameter estimation and hypothesis testing. Specifically, EFI provides higher fidelity in parameter estimation, especially when outliers are present in the observations, and eliminates the need for theoretical reference distributions in hypothesis testing, thereby automating the statistical inference process. Extended fiducial inference also provides an innovative framework for semi-supervised learning.
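One computational ingredient named in the abstract, stochastic gradient MCMC, can be sketched in a few lines. The example below runs stochastic gradient Langevin dynamics (SGLD) on a toy linear model; it is illustrative only and omits EFI's error imputation and sparse-DNN inverse estimation.

```python
# SGLD sampling of the slope of a toy linear model y = theta * x + noise.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 0.5
x = rng.normal(size=n)
theta_true = 2.0
y = theta_true * x + sigma * rng.normal(size=n)

theta, eps, samples = 0.0, 1e-3, []
for step in range(5000):
    i = rng.integers(0, n, size=32)                    # minibatch indices
    # Minibatch log-likelihood gradient, rescaled to the full data set.
    grad = (n / 32) * np.sum((y[i] - theta * x[i]) * x[i]) / sigma**2
    # Langevin update: half-step along the gradient plus injected noise.
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.normal()
    if step > 1000:                                    # discard burn-in
        samples.append(theta)

print("posterior mean ~", np.mean(samples))
```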