Context. Several research areas have emerged and proceeded independently when in fact they have much in common. These include: mutant subsumption and mutant set minimization; relative correctness and the semantic definition of faults; differentiator sets and their application to test diversity; generate-and-validate methods of program repair; and test suite coverage metrics. Objective. Highlight their analogies, commonalities, and overlaps; explore their potential for synergy and shared research goals; and unify several disparate concepts around a minimal set of artifacts. Method. Introduce and analyze a minimal set of concepts that enable us to model these disparate research efforts, and explore how these models may enable us to share insights between different research directions and advance their respective goals. Results. Capturing absolute (total and partial) correctness and relative (total and partial) correctness with a single concept: detector sets. Using the same concept to quantify the effectiveness of test suites, and proving that the proposed measure satisfies appealing monotonicity properties. Using the measure of test suite effectiveness to model mutant set minimization as an optimization problem, characterized by an objective function and a constraint. Generalizing the concept of mutant subsumption using the concept of differentiator sets. Identifying analogies between detector sets and differentiator sets, and inferring relationships between subsumption and relative correctness. Conclusion. This paper does not aim to answer any pressing research question as much as it aims to raise research questions that use the insights gained from one research venue to gain a fresh perspective on a related research issue. Keywords: mutant subsumption; mutant set minimization; relative correctness; absolute correctness; total correctness; partial correctness; program fault; program repair; differentiator set; detector set.
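To make the unifying role of detector sets concrete, here is one hedged formalization (our reading, not necessarily the paper's exact definitions), assuming the detector set D_R(P) of a program P against a specification R is the set of inputs on which P fails to satisfy R:

```latex
% Hedged sketch (notation ours): D_R(P) = inputs on which P violates R.
\[
  P \text{ is absolutely correct w.r.t. } R \;\iff\; D_R(P) = \emptyset,
  \qquad
  P' \text{ is at least as correct as } P \text{ w.r.t. } R
     \;\iff\; D_R(P') \subseteq D_R(P).
\]
```

Under this reading, absolute correctness is the limiting case of relative correctness: an absolutely correct program has an empty detector set and is therefore at least as correct as every other candidate program.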
Semantic Coverage: Measuring Test Suite Effectiveness
To reduce the cost of mutation testing, researchers have sought to find minimal mutant sets. As an optimization problem, mutant set minimization is defined by two parameters: the objective function to be optimized and the constraint under which the optimization is carried out. Whereas the objective function of this problem is clear (minimizing the cardinality of the mutant set), the constraint under which the optimization is attempted has not been clearly articulated in the literature. In this paper, we propose a formal definition of this constraint and discuss in what sense, and to what extent, published mutant set minimization algorithms comply with it.
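As an illustration of the optimization view (a minimal sketch under our own assumptions, not the formal definition proposed in the paper), suppose the constraint is read as: any test suite that kills every mutant in the minimized set also kills every mutant in the original set. A subsumption-based greedy reduction then looks like the following; the mutant names and kill sets are hypothetical:

```python
# Hypothetical sketch: mutant set minimization as constrained optimization.
# kill[m] is the set of tests that kill mutant m (one row of a kill matrix).
# Objective: minimize the number of retained mutants.  Assumed constraint:
# a suite that kills every retained mutant kills every original mutant.

def minimize_mutants(kill):
    """Greedy reduction: drop any mutant subsumed by a retained one."""
    # Consider hardest-to-kill mutants first (fewest killing tests);
    # equivalent mutants (empty kill set) are excluded up front.
    mutants = sorted((m for m in kill if kill[m]), key=lambda m: len(kill[m]))
    kept = []
    for m in mutants:
        # m is redundant if some kept mutant m2 is at least as hard to
        # kill: every test that kills m2 also kills m.
        if not any(kill[m2] <= kill[m] for m2 in kept):
            kept.append(m)
    return kept

# Toy kill matrix: any suite killing m2 must contain t3, which also kills m1.
kills = {"m1": {"t1", "t3"}, "m2": {"t3"}, "m3": {"t2"}}
print(minimize_mutants(kills))  # -> ['m2', 'm3']
```

Note that this greedy pass preserves the assumed constraint but does not guarantee a minimum-cardinality set; the gap between such heuristic reductions and the true optimum is exactly what a formal statement of the constraint makes analyzable.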
- Award ID(s):
- 2043104
- PAR ID:
- 10538749
- Publisher / Repository:
- SCITEPRESS - Science and Technology Publications
- Date Published:
- ISBN:
- 978-989-758-665-1
- Page Range / eLocation ID:
- 287 to 294
- Subject(s) / Keyword(s):
- software testing; test suite effectiveness; syntactic coverage; mutation coverage; semantic coverage
- Format(s):
- Medium: X
- Location:
- Rome, Italy
- Sponsoring Org:
- National Science Foundation
More Like this
We study the polyhedral convex hull structure of a mixed-integer set which arises in a class of cardinality-constrained concave submodular minimization problems. This class of problems has an objective function of the form $f(a^Tx)$, where f is a univariate concave function, a is a non-negative vector, and x is a binary vector of appropriate dimension. Such minimization problems frequently appear in applications that involve risk-aversion or economies of scale. We propose three classes of strong valid linear inequalities for this convex hull and specify their facet conditions when a has two distinct values. We show how to use these inequalities to obtain valid inequalities for a general a that contains multiple values. We further provide a complete linear convex hull description for this mixed-integer set when a contains two distinct values and the cardinality constraint upper bound is two. Our computational experiments on the mean-risk optimization problem demonstrate the effectiveness of the proposed inequalities in a branch-and-cut framework.
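A plausible formalization of the mixed-integer set described above (our reading; the paper's exact notation may differ) is the epigraph of $f(a^Tx)$ over binary x with a cardinality bound k:

```latex
% Hedged sketch: epigraph set for minimizing f(a^T x), f univariate
% concave, a >= 0, x binary, with cardinality upper bound k.
\[
  S \;=\; \bigl\{ (x, w) \in \{0,1\}^n \times \mathbb{R} \;:\;
     w \ge f(a^{\mathsf{T}} x),\;\; \textstyle\sum_{i=1}^{n} x_i \le k \bigr\}.
\]
% The study concerns conv(S) and strong valid linear inequalities for it.
```

Minimizing w over S then recovers the original concave minimization, which is why strong inequalities for conv(S) tighten branch-and-cut relaxations.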
We develop approximation algorithms for set-selection problems with deterministic constraints but random objective values, i.e., stochastic probing problems. When the goal is to maximize the objective, approximation algorithms for probing problems are well studied. On the other hand, few techniques are known for minimizing the objective, especially in the adaptive setting, where information about the random objective is revealed during the set-selection process and allowed to influence it. For minimization problems in particular, incorporating adaptivity can have a considerable effect on performance. In this work, we seek approximation algorithms that compare well to the optimal adaptive policy. We develop new techniques for adaptive minimization, applying them to a few problems of interest. The core technique we develop here is an approximate reduction from an adaptive expectation minimization problem to a set of adaptive probability minimization problems which we call threshold problems. By providing near-optimal solutions to these threshold problems, we obtain bicriteria adaptive policies. We apply this method to obtain an adaptive approximation algorithm for the Min-Element problem, where the goal is to adaptively pick random variables to minimize the expected minimum value seen among them, subject to a knapsack constraint. This partially resolves an open problem raised in [Goel et al., 2010]. We further consider three extensions of the Min-Element problem, where our objective is the sum of the smallest k element-weights, or the weight of the min-weight basis of a given matroid, or where the constraint is given not by a knapsack but by a matroid constraint. For all three variations, we develop adaptive approximation algorithms for their corresponding threshold problems and prove their near-optimality via coupling arguments.
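A hedged formalization of the Min-Element problem as described above (notation ours), where an adaptive policy π selects random variables X_i one at a time, paying cost c_i per selection against a budget B:

```latex
% Hedged sketch (notation ours): S_pi is the random set chosen by policy pi.
\[
  \min_{\pi \,\in\, \text{adaptive policies}}
     \; \mathbb{E}\Bigl[\, \min_{i \in S_\pi} X_i \,\Bigr]
  \quad \text{s.t.} \quad \sum_{i \in S_\pi} c_i \le B .
\]
% One natural reading of the associated threshold problem, for a fixed tau,
% replaces the expectation with the probability  Pr[ min_{i in S_pi} X_i > tau ].
```

The reduction sketched in the abstract then solves a family of such threshold problems near-optimally and stitches the answers into a bicriteria adaptive policy for the expectation objective.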
In a chance constrained program (CCP), decision makers seek the best decision whose probability of violating the uncertainty constraints is within a prespecified risk level. As a CCP is often nonconvex and difficult to solve to optimality, much effort has been devoted to developing convex inner approximations for it, among which the conditional value-at-risk (CVaR) approximation has been known to be the best for more than a decade. This paper studies and generalizes ALSO-X, originally proposed by Ahmed, Luedtke, SOng, and Xie in 2017, for solving a CCP. We first show that ALSO-X resembles a bilevel optimization, where the upper-level problem is to find the best objective function value and enforce the feasibility of the CCP for a given decision from the lower-level problem, and the lower-level problem is to minimize the expectation of constraint violations subject to the upper bound on the objective function value provided by the upper-level problem. This interpretation motivates us to prove that when the uncertain constraints are convex in the decision variables, ALSO-X always outperforms the CVaR approximation. We further show (i) sufficient conditions under which ALSO-X can recover an optimal solution to a CCP; (ii) an equivalent bilinear programming formulation of a CCP, inspiring us to enhance ALSO-X with a convergent alternating minimization method (ALSO-X+); and (iii) an extension of ALSO-X and ALSO-X+ to distributionally robust chance constrained programs (DRCCPs) under the ∞-Wasserstein ambiguity set. Our numerical study demonstrates the effectiveness of the proposed methods.
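The bilevel reading of ALSO-X described above can be sketched as follows (notation ours, not the paper's), for a CCP of the form min c^T x subject to Pr[g(x, ξ) ≤ 0] ≥ 1 − ε:

```latex
% Hedged sketch of the bilevel interpretation (notation ours):
\[
\begin{aligned}
  \text{(upper)} \quad & \min_{t} \; t
     \quad \text{s.t.} \quad x^{\ast}(t) \text{ is feasible for the CCP},\\
  \text{(lower)} \quad & x^{\ast}(t) \in \arg\min_{x} \;
     \mathbb{E}\bigl[\, g(x, \xi)^{+} \bigr]
     \quad \text{s.t.} \quad c^{\mathsf{T}} x \le t ,
\end{aligned}
\]
% i.e., the upper level searches for the best objective bound t while the
% lower level minimizes expected constraint violation under that bound.
```

This framing makes the comparison with CVaR natural: both methods trade objective value against a convex surrogate for the violation probability, and the claim is that ALSO-X's surrogate is never looser when the uncertain constraints are convex in x.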
Trauma injuries continue to be the leading cause of mortality and morbidity among US citizens aged 44 years and under. Government agencies are often in charge of designing an effective trauma network in their region to provide prompt and definitive care to their citizens. This process is, however, largely manual and experience-based, and it often leads to a suboptimal network in terms of patient safety. To support effective decision making, we propose a Nested Trauma Network Design Problem (NTNDP), which can be characterized as a nested multi-level, multi-customer, multi-transportation, multi-criteria, capacitated model with the bi-objective of maximizing the weighted sum of equity and effectiveness in patient safety. We use mistriages (system-related under- and over-triages) as surrogates for patient safety. To add realism, we include intermediate trauma centers, which are set up in many US states to serve as feeder centers to major trauma centers and improve patient safety, and three criteria that mimic EMS on-scene decisions. We propose a '3-phase' solution approach that first solves a relaxed version of the model, then solves a Constraint Satisfaction Problem, and then, if needed, solves a modified version of the original optimization problem, all using a commercial solver. Our findings suggest that solutions are sensitive to (i) the proportion of assignments attributed to various destination determination criteria, (ii) the distribution of trauma patients, and (iii) the relative emphasis on equity vs. effectiveness. We also illustrate the use of our approach using real data from a midwestern US state; results show over 30% improvement in the objective value.
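For concreteness, the bi-objective described above can be sketched as follows (notation ours; the paper's decision variables and exact measures may differ), with a weight λ trading equity against effectiveness, both driven by mistriage rates:

```latex
% Hedged sketch of the NTNDP objective (notation ours): x encodes the
% network design and patient-to-center assignments over feasible set X.
\[
  \max_{x \in X} \;\; \lambda \cdot \mathrm{Equity}(x)
     \;+\; (1 - \lambda) \cdot \mathrm{Effectiveness}(x),
  \qquad \lambda \in [0, 1],
\]
% where both terms are measured through mistriages (system-related under-
% and over-triages), the paper's surrogates for patient safety.
```

The reported sensitivity to the "relative emphasis on equity vs. effectiveness" corresponds to sweeping λ across this weighted-sum frontier.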