

Title: Using semi-supervised learning for predicting metamorphic relations
Software testing is difficult to automate, especially for programs that have no oracle, that is, no method of determining which output is correct. Metamorphic testing is a solution to this problem: it uses metamorphic relations to define test cases and expected outputs. However, a large amount of a domain expert's time is needed to determine which metamorphic relations can be used to test a given program. Metamorphic relation prediction removes the need for such an expert. We propose a method that uses semi-supervised machine learning to detect which metamorphic relations are applicable to a given code base. We compare this semi-supervised model with a supervised model and show that the addition of unlabeled data improves the classification accuracy of the MR prediction model.
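As a rough illustration of the semi-supervised setup described above, the sketch below treats the question "does a given metamorphic relation apply to this function?" as a binary classification task and folds unlabeled functions in via self-training. The feature vectors, label counts, and the use of scikit-learn's SelfTrainingClassifier are illustrative assumptions, not the exact pipeline of the paper.

```python
# Hypothetical sketch: predicting whether one MR applies to a function, with
# unlabeled functions incorporated through self-training. Feature extraction
# from the code is represented by a placeholder vector per function.
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

# X: one feature vector per function (e.g., derived from its control-flow graph).
# y: 1 = MR applies, 0 = MR does not apply, -1 = unlabeled (no expert label yet).
X = np.random.rand(200, 16)
y = np.full(200, -1)
y[:40] = np.random.randint(0, 2, size=40)  # only 40 functions labeled by an expert

base = SVC(probability=True, kernel="rbf")
model = SelfTrainingClassifier(base, threshold=0.9)  # pseudo-label confident unlabeled points
model.fit(X, y)

# Predict applicability of the MR for previously unseen functions.
print(model.predict(np.random.rand(5, 16)))
```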
Award ID(s):
1656877
NSF-PAR ID:
10062927
Author(s) / Creator(s):
Date Published:
Journal Name:
The 3rd International Workshop on Metamorphic Testing
Page Range / eLocation ID:
14 to 17
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In machine learning, supervised classifiers are used to obtain predictions for unlabeled data by inferring prediction functions from labeled data. Supervised classifiers are widely applied in domains such as computational biology, computational physics, and healthcare to make critical decisions. However, it is often hard to test supervised classifiers since the expected answers are unknown. This is commonly known as the oracle problem, and metamorphic testing (MT) has been used to test such programs. In MT, metamorphic relations (MRs) are developed from intrinsic characteristics of the software under test (SUT). These MRs are used to generate test data and to verify the correctness of the test results without the presence of a test oracle. The effectiveness of MT heavily depends on the MRs used for testing. In this paper, we conduct an extensive empirical study to evaluate the fault detection effectiveness of MRs that have been used in multiple previous studies to test supervised classifiers. Our study uses a total of 709 reachable mutants generated by multiple mutation engines and data sets with varying characteristics to test the SUT. Our results reveal that only 14.8% of these mutants are detected using the MRs and that the fault detection effectiveness of these MRs does not scale with the increased number of mutants compared to what was reported in previous studies.
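For readers unfamiliar with how an MR is checked in practice, the following hedged sketch shows one relation commonly used for supervised classifiers: permuting the order of the training samples should not change the predictions. The classifier and data are placeholders rather than the actual SUT or MRs from the study.

```python
# Minimal sketch of one MR for supervised classifiers: training on a permutation
# of the same data should leave the predictions unchanged.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((100, 4))
y_train = rng.integers(0, 2, size=100)
X_test = rng.random((20, 4))

# Source test case: train on the original data.
source = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# Follow-up test case: train on a permutation of the same data.
perm = rng.permutation(len(X_train))
followup = KNeighborsClassifier(n_neighbors=3).fit(X_train[perm], y_train[perm])

# The MR is violated if the two models disagree on any test input.
assert (source.predict(X_test) == followup.predict(X_test)).all(), "MR violated"
```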
  2. Matrices often represent important information in scientific applications and are involved in complex calculations. However, systematically testing these applications is hard due to the oracle problem. Metamorphic testing is an effective approach for testing such applications because it uses metamorphic relations to determine whether test cases have passed or failed. Metamorphic relations are typically identified with the help of a domain expert, which is a labor-intensive task. In this work, we use a graph-kernel-based machine learning approach to predict metamorphic relations for matrix calculation programs. Previously, this graph-kernel-based approach was used to successfully predict metamorphic relations for programs that perform numerical calculations. The results of this study show that the approach can be used to predict metamorphic relations for matrix calculation programs as well.
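The graph-kernel idea can be sketched roughly as follows: each program is represented as a labeled graph (for example, its control-flow graph), pairwise graph similarities form a kernel matrix, and an SVM with a precomputed kernel predicts whether a given MR applies. The node-label histogram similarity below is a simple stand-in for a proper graph kernel (such as a random-walk kernel), and all names and labels are illustrative assumptions.

```python
# Rough sketch of graph-kernel-based MR prediction with placeholder graphs.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def label_histogram(g, labels=("add", "mul", "loop", "cmp")):
    # Count how often each node label (operation type) appears in the graph.
    ops = nx.get_node_attributes(g, "op")
    return np.array([sum(1 for v in ops.values() if v == l) for l in labels], float)

def kernel_matrix(graphs_a, graphs_b):
    # Pairwise similarities between two collections of graphs (linear kernel
    # on label histograms; a stand-in for a random-walk graph kernel).
    A = np.array([label_histogram(g) for g in graphs_a])
    B = np.array([label_histogram(g) for g in graphs_b])
    return A @ B.T

def train_mr_predictor(graphs, y):
    # graphs: one nx.Graph per program; y: 1 if the MR applies to it, else 0.
    K = kernel_matrix(graphs, graphs)
    clf = SVC(kernel="precomputed").fit(K, y)
    return lambda new_graphs: clf.predict(kernel_matrix(new_graphs, graphs))
```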
  3. Metamorphic testing (MT) is widely used for testing programs that face the oracle problem. It uses a set of metamorphic relations (MRs), which are relations among multiple inputs and their corresponding outputs, to determine whether the program under test is faulty. Typically, MRs vary in their ability to detect faults in the program under test, and some MRs tend to detect the same set of faults. In this paper, we propose approaches to prioritize MRs to improve the efficiency and effectiveness of MT for regression testing. We present two MR prioritization approaches: (i) fault-based and (ii) coverage-based. To evaluate these approaches, we conduct experiments on three complex open-source software systems. Our results show that our MR prioritization approaches significantly outperform the current practice of executing the source and follow-up test cases of the MRs in an ad hoc manner, in terms of fault detection effectiveness. Further, fault-based MR prioritization reduces the number of source and follow-up test cases that need to be executed as well as the average time taken to detect a fault, which saves time and cost during the testing process.
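Fault-based prioritization can be illustrated with a simple greedy scheme: repeatedly pick the MR that detects the most not-yet-covered faults (for example, mutants killed on an earlier version). This is only an illustrative sketch under that assumption, not necessarily the exact algorithm evaluated in the paper; the kill sets are hypothetical data.

```python
# Greedy sketch of fault-based MR prioritization over hypothetical kill sets.
def prioritize_mrs(kill_sets):
    """kill_sets: dict mapping MR name -> set of mutant ids it detected."""
    remaining = dict(kill_sets)
    covered, order = set(), []
    while remaining:
        # The MR with the largest number of still-uncovered mutants goes next.
        best = max(remaining, key=lambda mr: len(remaining[mr] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

kill_sets = {"MR1": {1, 2, 3}, "MR2": {3, 4}, "MR3": {2, 3, 4, 5}, "MR4": {6}}
print(prioritize_mrs(kill_sets))  # e.g., ['MR3', 'MR1', 'MR4', 'MR2']
```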
  4. Data poisoning aims to compromise a machine learning based software component by contaminating its training set to change its prediction results for test inputs. Existing methods for deciding data-poisoning robustness have either poor accuracy or long running times and, more importantly, they can only certify some of the truly robust cases but remain inconclusive when certification fails. In other words, they cannot falsify the truly non-robust cases. To overcome this limitation, we propose a systematic testing based method that can falsify as well as certify data-poisoning robustness for a widely used supervised learning technique named k-nearest neighbors (KNN). Our method is faster and more accurate than the baseline enumeration method, thanks to a novel over-approximate analysis in the abstract domain that quickly narrows down the search space, combined with systematic testing in the concrete domain that finds the actual violations. We have evaluated our method on a set of supervised learning datasets. Our results show that the method significantly outperforms state-of-the-art techniques and can decide the data-poisoning robustness of KNN prediction results for most of the test inputs.
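The baseline enumeration idea mentioned above can be sketched as follows, assuming a poisoning model in which up to n points may have been maliciously injected into the training set (so every clean candidate set is obtained by removing up to n points). That poisoning model and the helper names are assumptions for illustration; the proposed method replaces this exhaustive loop with an over-approximate abstract analysis plus targeted concrete testing.

```python
# Brute-force check: is the KNN prediction for x unchanged under every removal
# of up to n training points? (Illustrative baseline, not the paper's method.)
from itertools import combinations
from collections import Counter
import numpy as np

def knn_predict(X, y, x, k=3):
    # Majority label among the k training points nearest to x.
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return Counter(y[idx]).most_common(1)[0][0]

def is_poisoning_robust(X, y, x, n, k=3):
    base = knn_predict(X, y, x, k)
    for m in range(1, n + 1):
        for removed in combinations(range(len(X)), m):
            keep = np.setdiff1d(np.arange(len(X)), removed)
            if knn_predict(X[keep], y[keep], x, k) != base:
                return False  # concrete violation found: not robust
    return True  # certified robust under this poisoning model
```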
  5. The problem of real-time prediction of blood glucose (BG) levels based on readings from a continuous glucose monitoring (CGM) device is of great importance in diabetes care and has therefore attracted a lot of research in recent years, especially research based on machine learning. An accurate prediction with a 30, 60, or 90 min prediction horizon has the potential of saving millions of dollars in emergency care costs. In this paper, we treat the problem as one of function approximation, where the value of the BG level at time t + h (where h is the prediction horizon) is considered to be an unknown function of d readings prior to time t. This unknown function may be supported on some unknown submanifold of the d-dimensional Euclidean space. While manifold learning is classically done in a semi-supervised setting, where the entire data set has to be known in advance, we use recent ideas to achieve an accurate function approximation in a supervised setting; i.e., we construct a model for the target function. We use the state-of-the-art, clinically relevant PRED-EGA grid to evaluate our results and demonstrate that, for a real-life dataset, our method performs better than a standard deep network, especially in the hypoglycemic and hyperglycemic regimes. One noteworthy aspect of this work is that the training data and test data may come from different distributions.
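The function-approximation framing can be made concrete with a small sketch: build (d past readings, BG value at t + h) pairs from a CGM series and fit a regressor. The 5-minute sampling interval, the window size, the synthetic series, and the use of a generic random forest in place of the paper's approximation scheme are all assumptions for illustration.

```python
# Sketch of the framing: the BG value at time t + h is modeled as an unknown
# function of the d most recent CGM readings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_windows(series, d=12, horizon_steps=6):
    """Turn a CGM series (assumed 5-min samples) into (d past readings, value at t+h) pairs."""
    X, y = [], []
    for t in range(d, len(series) - horizon_steps):
        X.append(series[t - d:t])            # the d readings up to time t
        y.append(series[t + horizon_steps])  # BG level 30 min later (6 * 5 min)
    return np.array(X), np.array(y)

# Synthetic stand-in for a real CGM trace.
cgm = 120 + 30 * np.sin(np.linspace(0, 20, 600)) + np.random.randn(600) * 5
X, y = make_windows(cgm)
model = RandomForestRegressor(n_estimators=100).fit(X[:400], y[:400])
print(model.predict(X[400:405]))  # predicted BG levels 30 minutes ahead
```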