Summary Many non‐homogeneous Poisson process software reliability growth models (SRGM) are characterized by a single continuous curve. However, failures are driven by factors such as the testing strategy and environment, integration testing, and resource allocation, which can introduce one or more changepoints into the fault detection process. Some researchers have proposed non‐homogeneous Poisson process SRGM with changepoints, but these consider only a common failure distribution before and after each changepoint. This paper proposes a heterogeneous single-changepoint framework for SRGM, which can exhibit different failure distributions before and after the changepoint. Combinations of two simple and distinct curves, an exponential and an S‐shaped curve, are employed to illustrate the concept. Ten data sets are used to compare these heterogeneous models against their homogeneous counterparts. Experimental results indicate that the heterogeneous changepoint models achieve better goodness‐of‐fit measures on 60% and 80% of the data sets with respect to the Akaike information criterion and predictive sum of squares measures, respectively.
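As an illustrative sketch only (the paper's exact parametric forms are not given here), a heterogeneous single-changepoint mean value function can follow one failure distribution before a changepoint tau and a different one after it. The example below pairs an exponential (Goel–Okumoto) curve with a delayed S-shaped curve; the parameter names and the way the second curve continues from the fault count reached at tau are assumptions made for illustration.

```python
import math

def mvf_exponential(t, a, b):
    """Goel-Okumoto exponential mean value function: expected faults by time t."""
    return a * (1.0 - math.exp(-b * t))

def mvf_s_shaped(t, a, b):
    """Delayed S-shaped mean value function."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def mvf_heterogeneous(t, tau, a1, b1, a2, b2):
    """Heterogeneous single-changepoint MVF: exponential before tau,
    S-shaped after, continuing from the value reached at the changepoint.
    (Illustrative construction, not the paper's exact formulation.)"""
    if t <= tau:
        return mvf_exponential(t, a1, b1)
    return mvf_exponential(tau, a1, b1) + mvf_s_shaped(t - tau, a2, b2)
```

A homogeneous counterpart would simply reuse the same distribution family on both sides of tau; the heterogeneous framework lets the two segments differ.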
A Family of Software Reliability Models with Bathtub-Shaped Fault Detection Rate
Researchers have proposed several software reliability growth models, many of which possess complex parametric forms. In practice, software reliability growth models should exhibit a balance between predictive accuracy and other statistical measures of goodness of fit, yet past studies have not always performed such a balanced assessment. This paper proposes a framework for software reliability growth models possessing a bathtub-shaped fault detection rate and derives stable and efficient expectation conditional maximization algorithms to enable the fitting of these models. The stages of the bathtub are interpreted in the context of the software testing process. The illustrations compare multiple bathtub-shaped and reduced model forms, including classical models, with respect to predictive and information theoretic measures. The results indicate that software reliability growth models possessing a bathtub-shaped fault detection rate outperformed classical models on both types of measures. The proposed framework and models may therefore be a practical compromise between model complexity and predictive accuracy.
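The bathtub shape can be sketched generically as the sum of a decreasing burn-in term, a constant floor, and an increasing wear-out term. The function below is a minimal illustration of that shape, not the paper's parametric family; all parameter names are assumptions.

```python
import math

def bathtub_fdr(t, burn_in, floor, wear_out, k1, k2):
    """Illustrative bathtub-shaped fault detection rate:
    a decaying burn-in term plus a constant floor plus a
    growing wear-out term (generic construction, for intuition only)."""
    return burn_in * math.exp(-k1 * t) + floor + wear_out * (math.exp(k2 * t) - 1.0)
```

Early in testing the first term dominates (high detection rate), the middle of testing sits near the floor, and late testing rises again, matching the three interpreted stages of the bathtub.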
- Award ID(s): 1749635
- PAR ID: 10336167
- Date Published:
- Journal Name: International Journal of Reliability, Quality and Safety Engineering
- Volume: 28
- Issue: 05
- ISSN: 0218-5393
- Page Range / eLocation ID: 2150034
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Recent research applies soft computing techniques to fit software reliability growth models. However, runtime performance and the distribution of the distance from an optimal solution over multiple runs must be explicitly considered to justify the practical utility of these approaches, promote comparison, and support reproducible research. This paper presents a meta-optimization framework to design stable and efficient multi-phase algorithms for fitting software reliability growth models. The approach combines initial parameter estimation techniques from statistical algorithms, the global search properties of soft computing, and the rapid convergence of numerical methods. Designs that exhibit the best balance between runtime performance and accuracy are identified. The approach is illustrated through nonhomogeneous Poisson process and covariate software reliability growth models, including a cross-validation step on data sets not used to identify designs. The results indicate the nonhomogeneous Poisson process model considered is too simple to benefit from soft computing because it incurs additional runtime with no increase in accuracy. However, a multi-phase design for the covariate software reliability growth model consisting of the bat algorithm followed by a numerical method achieves better performance and converges consistently, compared to a numerical method only. The proposed approach supports higher dimensional covariate software reliability growth model fitting suitable for implementation in a tool.
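The multi-phase idea can be sketched as a pipeline: a statistical initial estimate, a global search phase, then local numerical refinement. Below is a minimal stand-in where plain random search substitutes for the soft-computing phase (e.g., the bat algorithm) and shrinking-step perturbation substitutes for the numerical method; the concrete phases and parameters are simplifying assumptions, not the paper's design.

```python
import random

def multi_phase_fit(objective, initial_guess, bounds, seed=0):
    """Illustrative multi-phase fitting: (1) supplied initial estimate,
    (2) global random search (stand-in for a soft-computing phase),
    (3) local refinement with shrinking perturbations (stand-in for
    a numerical method). Minimizes `objective` over `bounds`."""
    rng = random.Random(seed)
    best, best_val = list(initial_guess), objective(initial_guess)
    # Phase 2: global search over the bounded parameter space
    for _ in range(200):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = objective(cand)
        if v < best_val:
            best, best_val = cand, v
    # Phase 3: local refinement, shrinking the step on failed moves
    step = [(hi - lo) * 0.1 for lo, hi in bounds]
    for _ in range(100):
        cand = [b + rng.uniform(-s, s) for b, s in zip(best, step)]
        cand = [min(max(c, lo), hi) for c, (lo, hi) in zip(cand, bounds)]
        v = objective(cand)
        if v < best_val:
            best, best_val = cand, v
        else:
            step = [s * 0.95 for s in step]
    return best, best_val
```

In the actual framework, `objective` would be the negative log-likelihood of an SRGM, and each phase would be a calibrated algorithm rather than these toy substitutes.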
-
Traditional software reliability growth models only consider defect discovery data, yet the practical concern of software engineers is the removal of these defects. Most attempts to model the relationship between defect discovery and resolution have been restricted to differential equation-based models associated with these two activities. However, defect tracking databases offer a practical source of information on the defect lifecycle suitable for more complete reliability and performance models. This paper explicitly connects software reliability growth models to software defect tracking. Data from a NASA project has been employed to develop differential equation-based models of defect discovery and resolution as well as distributional and Markovian models of defect resolution. The states of the Markov model represent thirteen unique stages of the NASA software defect lifecycle. Both state transition probabilities and transition time distributions are computed from the defect database. Illustrations compare the predictive and computational performance of alternative approaches. The results suggest that the simple distributional approach achieves the best tradeoff between these two performance measures, but that enhanced data collection practices could improve the utility of the more advanced approaches and the inferences they enable.
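Estimating the Markov model's state transition probabilities from a defect tracking database reduces to counting observed state-to-state moves and normalizing per source state. The sketch below assumes a simple log of (from_state, to_state) pairs; the state names in the usage are hypothetical, not the thirteen NASA lifecycle stages.

```python
from collections import defaultdict

def estimate_transition_probs(transitions):
    """Estimate Markov state-transition probabilities from logged
    (from_state, to_state) pairs, e.g., exported from a defect
    tracking database. Returns {src: {dst: probability}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in transitions:
        counts[src][dst] += 1
    probs = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        probs[src] = {dst: n / total for dst, n in dsts.items()}
    return probs
```

Transition time distributions would be fitted analogously from the timestamps attached to each logged transition.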
-
With the increased interest to incorporate machine learning into software and systems, methods to characterize the impact of the reliability of machine learning are needed to ensure the reliability of the software and systems in which these algorithms reside. Towards this end, we build upon the architecture-based approach to software reliability modeling, which represents application reliability in terms of the component reliabilities and the probabilistic transitions between the components. Traditional architecture-based software reliability models consider all components to be deterministic software. We therefore extend this modeling approach to the case where some components represent learning-enabled components. Here, the reliability of a machine learning component is interpreted as the accuracy of its decisions, which is a common measure of classification algorithms. Moreover, we allow these machine learning components to be fault-tolerant in the sense that multiple diverse classifier algorithms are trained to guide decisions, with the majority decision taken. We demonstrate the utility of the approach to assess the impact of machine learning on software reliability as well as illustrate the concept of reliability growth in machine learning. Finally, we validate past analytical results for a fault tolerant system composed of correlated components with real machine learning algorithms and data, demonstrating the analytical expression’s ability to accurately estimate the reliability of the fault tolerant machine learning component and subsequently the architecture-based software within which it resides.
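Under the simplifying assumption that the classifiers are independent (the paper itself also treats the correlated case), the reliability of a majority-voted ensemble of n classifiers, each with accuracy p, is the probability that more than half decide correctly:

```python
from math import comb

def majority_vote_reliability(p, n):
    """Reliability of an n-classifier majority vote, assuming
    independent classifiers each with accuracy p. For odd n,
    sums the binomial probability of a correct majority.
    (Independence is an assumption; correlation lowers the benefit.)"""
    need = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(need, n + 1))
```

For three classifiers with accuracy 0.9 this gives 0.972, illustrating how a fault-tolerant learning-enabled component can be more reliable than any single classifier; this component reliability then feeds into the architecture-based model alongside the transition probabilities.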
-
The ability to accurately measure recovery rate of infrastructure systems and communities impacted by disasters is vital to ensure effective response and resource allocation before, during, and after a disruption. However, a challenge in quantifying such measures resides in the lack of data as community recovery information is seldom recorded. To provide accurate community recovery measures, a hierarchical Bayesian kernel model (HBKM) is developed to predict the recovery rate of communities experiencing power outages during storms. The performance of the proposed method is evaluated using cross‐validation and compared with two models, the hierarchical Bayesian regression model and the Poisson generalized linear model. A case study focusing on the recovery of communities in Shelby County, Tennessee after severe storms between 2007 and 2017 is presented to illustrate the proposed approach. The predictive accuracy of the models is evaluated using the log‐likelihood and root mean squared error. The HBKM yields on average the highest out‐of‐sample predictive accuracy. This approach can help assess the recoverability of a community when data are scarce and inform decision making in the aftermath of a disaster. An illustrative example is presented demonstrating how accurate measures of community resilience can help reduce the cost of infrastructure restoration.
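The two evaluation measures named here are standard and easy to state precisely; the sketch below shows both for count-valued predictions, with the Poisson log-likelihood form matching the Poisson GLM baseline. The function names and interfaces are assumptions for illustration.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def poisson_log_likelihood(counts, rates):
    """Log-likelihood of observed counts under predicted Poisson rates:
    sum of  -lambda + y*log(lambda) - log(y!)  over the observations."""
    return sum(-lam + y * math.log(lam) - math.lgamma(y + 1)
               for y, lam in zip(counts, rates))
```

Higher out-of-sample log-likelihood and lower RMSE both indicate better predictive accuracy, which is the sense in which the HBKM outperforms the two baselines.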