

Title: In-Situ Concolic Testing of JavaScript
JavaScript (JS) has evolved into a versatile and popular programming language used not only for the web but also for a wide range of server-side and client-side applications. Effective, efficient, and easy-to-use testing techniques for JS scripts are in great demand. In this paper, we introduce a holistic approach to applying concolic testing to JS scripts in-situ, i.e., JS scripts are executed in their native environments as part of concolic execution, and the generated test cases are directly replayed in those environments. We have implemented this approach in the context of Node.js, a JS runtime built on top of Chrome’s V8 JS engine, and evaluated its effectiveness and efficiency by applying it to 180 Node.js libraries that make heavy use of string operations. For 85% of these libraries, it achieved statement coverage between 75% and 100%, closely matching the coverage of the hand-crafted unit test suites accompanying their NPM releases. Our approach also detected numerous exceptions in these libraries. We analyzed the exception reports for 12 representative libraries and found 6 bugs, 4 of which were previously undetected. The bug reports and patches that we filed for these bugs have been accepted by the library developers on GitHub.
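To make the idea concrete, below is a minimal, hypothetical sketch of what an in-situ concolic run over a string-handling Node.js function might look like. The markSymbolic helper and the splitKeyValue function are illustrative placeholders, not the paper's actual API; a real concolic engine would track the seed input symbolically, solve the negated branch conditions, and replay the generated inputs in the same Node.js environment.

```typescript
// Hypothetical driver for in-situ concolic testing of a Node.js string utility.
// `markSymbolic` stands in for whatever injection mechanism a concolic engine
// provides; here it is an identity function so the file runs stand-alone.
function markSymbolic<T>(value: T): T {
  return value;
}

// Example library function with an input-dependent exception: it throws when
// the separator is absent, a path a concolic engine could reach by solving
// the negated `includes` branch condition.
function splitKeyValue(pair: string): { key: string; value: string } {
  if (!pair.includes("=")) {
    throw new Error(`malformed pair: ${pair}`);
  }
  const idx = pair.indexOf("=");
  return { key: pair.slice(0, idx), value: pair.slice(idx + 1) };
}

// Concolic run: start from a concrete seed executed in the native runtime.
const seed = markSymbolic("a=b");
console.log(splitKeyValue(seed));
// A generated test case such as splitKeyValue("ab") would replay the exception.
```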
Award ID(s):
1908571
NSF-PAR ID:
10477207
Author(s) / Creator(s):
;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
Format(s):
Medium: X
Location:
Macao SAR, China
Sponsoring Org:
National Science Foundation
More Like this
  1. JavaScript has become the most popular programming language for web front-end development. With such popularity, there is a great demand for thorough testing of client-side JavaScript web applications. In this paper, we present a novel approach to concolic testing of front-end JavaScript web applications. This approach leverages widely used JavaScript testing frameworks such as Jest and Puppeteer and conducts concolic execution on JavaScript functions in web applications for unit testing. The seamless integration of concolic testing with these testing frameworks allows injection of symbolic variables within the native execution context of a JavaScript web function and precise capture of concrete execution traces of the function under test. Such precise execution traces greatly improve the effectiveness and efficiency of the subsequent symbolic analysis for test generation. We have implemented our approach on Jest and Puppeteer. Applying our Jest implementation to Metamask, one of the most popular crypto wallets, uncovered 3 bugs and 1 test suite improvement; the bug reports have all been accepted by Metamask developers on GitHub. We also applied our Puppeteer implementation to 21 GitHub projects and detected 4 bugs.
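    The sketch below shows where such symbolic injection would conceptually sit inside an ordinary Jest unit test. The parseQuery function and the symbolic helper are hypothetical placeholders, not the paper's tooling; only the Jest test structure (describe/test/expect) is standard.

```typescript
// Minimal Jest unit test sketch (TypeScript). A concolic harness would treat
// the seed passed to `symbolic` as a symbolic input and substitute
// solver-chosen strings on subsequent runs.
function parseQuery(q: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const part of q.split("&")) {
    const [k, v] = part.split("=");
    out[k] = v ?? "";
  }
  return out;
}

// Placeholder: in a concolic run this would register `seed` as symbolic.
const symbolic = (seed: string): string => seed;

describe("parseQuery", () => {
  test("splits key=value pairs", () => {
    const input = symbolic("a=1&b=2"); // concrete seed, symbolically tracked
    expect(parseQuery(input)).toEqual({ a: "1", b: "2" });
  });
});
```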
  2. When developers make changes to their code, they typically run regression tests to detect if their recent changes (re)introduce any bugs. However, many tests are flaky, and their outcomes can change non-deterministically, failing without apparent cause. Flaky tests are a significant nuisance in the development process, since they make it more difficult for developers to trust the outcome of their tests. The traditional approach to identify flaky tests is to rerun them multiple times: if a test is observed both passing and failing on the same code, it is definitely flaky. We conducted a very large empirical study looking for flaky tests by rerunning the test suites of 24 projects 10,000 times each, and found that even with this many reruns, some flaky tests were still not detected. We propose FlakeFlagger, a novel approach that collects a set of features describing the behavior of each test, and then predicts tests that are likely to be flaky based on similar behavioral features. We found that FlakeFlagger correctly labeled at least as many tests as flaky as a state-of-the-art flaky test classifier, but that FlakeFlagger reported far fewer false positives (an increase in precision from just 11% to 60%). This lower false positive rate translates directly to saved time for researchers and developers who use the classification result to guide more expensive flaky test detection processes. By investigating the information gain of each feature, we conclude that test execution time, overall test coverage, coverage of recently changed lines and usage of third party libraries are effective predictors of test flakiness. We did not find any keywords or tokens in the source code of tests that were effective in predicting test flakiness, and did not find the presence of test smells to be effective in predicting test flakiness.

    This archive contains the dataset of flaky tests that we collected, along with the features that we collected for each test.

    Contents:
    Project_Info.csv: List of projects and their revisions studied
    build-logs-<project-slug>.tgz: An archive of all of the maven build logs from each of the 10,000 runs of that project's test suite. 
    failing-test-reports-<project-slug>.tgz: An archive of all of the surefire XML reports for each failing test of each build of each project.
    test_results.csv: Summary of the number of passing and failing runs for each test in each project (a minimal sketch of flagging flaky tests from this file follows the contents list).
    "Run ID" is a key into the <project-slug>.tgz archive also in this artifact, which refers to the run in which we observed the test fail.
    test_features.csv: Summary of the features that each test had, as per our feature detectors described in the paper
    flakeflagger-code.zip: All scripts used to generate and process these results. These scripts are also located at https://github.com/AlshammariA/FlakeFlagger

     
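    The rerun-based definition above (a test that both passes and fails on the same code is definitely flaky) can be checked directly against test_results.csv. Below is a minimal sketch; the column names ("project", "test", "passes", "failures") are assumptions about the file layout, not confirmed headers, and the naive comma split ignores quoting.

```typescript
// Minimal sketch (TypeScript/Node.js) of the rerun-based flakiness check over
// test_results.csv. Column names are assumed; adjust to the actual header row.
import { readFileSync } from "fs";

const rows = readFileSync("test_results.csv", "utf8").trim().split("\n");
const header = rows[0].split(",");
const col = (name: string): number => header.indexOf(name);

for (const line of rows.slice(1)) {
  const cells = line.split(","); // naive CSV split; adequate for a sketch
  const passes = Number(cells[col("passes")]);
  const failures = Number(cells[col("failures")]);
  // A test observed both passing and failing on the same code is flaky.
  if (passes > 0 && failures > 0) {
    console.log(`${cells[col("project")]} :: ${cells[col("test")]} is flaky`);
  }
}
```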
  3. In the past decade, Deep Learning (DL) systems have been widely deployed in various application domains to facilitate our daily life, e.g., natural language processing, healthcare, activity recognition, and autonomous driving. Meanwhile, it is extremely challenging to ensure the correctness of DL systems (e.g., due to their intrinsic nondeterminism), and bugs in DL systems can cause serious consequences and may even threaten human lives. In the literature, researchers have explored various techniques to test, analyze, and verify DL models, since their quality directly affects the corresponding system behaviors. Recently, researchers have also proposed novel techniques for testing the underlying operator-level DL libraries (such as TensorFlow and PyTorch), which provide general binary implementations for each high-level DL operator and are the foundation for running DL models on different hardware platforms. However, there is still limited work targeting the reliability of the emerging tensor compilers (also known as DL compilers), which aim to automatically compile high-level tensor computation graphs directly into high-performance binaries for better efficiency, portability, and scalability than traditional operator-level libraries. Therefore, in this paper, we target the important problem of tensor compiler testing and propose Tzer, a practical fuzzing technique for the widely used TVM tensor compiler. Tzer focuses on mutating the low-level Intermediate Representation (IR) for TVM due to the limited mutation space of the high-level IR. More specifically, Tzer leverages both general-purpose and tensor-compiler-specific mutators guided by coverage feedback for diverse and evolutionary IR mutation; furthermore, since tensor compilers provide various passes (i.e., transformations) for IR optimization, Tzer also performs pass mutation in tandem with IR mutation for more effective fuzzing. Our experimental results show that Tzer substantially outperforms existing fuzzing techniques on tensor compiler testing, with 75% higher coverage and 50% more valuable tests than the 2nd-best technique. The different components of Tzer have also been validated via an ablation study. To date, Tzer has detected 49 previously unknown bugs for TVM, with 37 bugs confirmed and 25 bugs fixed (PR merged).
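    The joint IR-and-pass mutation loop described above can be sketched generically as coverage-guided fuzzing over (IR, pass list) pairs. Everything in the sketch below is a placeholder; none of the types or functions correspond to actual TVM or Tzer APIs.

```typescript
// Conceptual sketch of coverage-guided fuzzing with IR and pass mutation in
// tandem. All types and functions are illustrative stand-ins, not TVM/Tzer APIs.
type IR = string;   // stands in for a low-level IR module
type Pass = string; // stands in for an optimization pass name

// Placeholder mutators: real ones would rewrite IR nodes and pass lists.
const mutateIR = (ir: IR): IR => ir + ";mutated";
const mutatePasses = (p: Pass[]): Pass[] => [...p].reverse();
// Placeholder oracle: real one would compile, run, and collect coverage.
const compileAndRun = (_ir: IR, _p: Pass[]) =>
  ({ newCoverage: Math.random() < 0.1, crashed: false });

function fuzz(seedIR: IR, seedPasses: Pass[], iterations: number): void {
  const corpus: Array<[IR, Pass[]]> = [[seedIR, seedPasses]];
  for (let i = 0; i < iterations; i++) {
    const [ir, passes] = corpus[Math.floor(Math.random() * corpus.length)];
    const candidate: [IR, Pass[]] = [mutateIR(ir), mutatePasses(passes)]; // mutate in tandem
    const result = compileAndRun(candidate[0], candidate[1]);
    if (result.crashed) console.log("potential compiler bug found");
    if (result.newCoverage) corpus.push(candidate); // keep inputs that expand coverage
  }
}

fuzz("ir-module", ["simplify", "vectorize"], 100);
```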
  4. Infrastructure as code (IaC) scripts, such as Ansible scripts, are used to provision computing infrastructure at scale. The existence of bugs in IaC test scripts, such as configuration and security bugs, can be consequential for the provisioned computing infrastructure. A characterization study of bugs in IaC test scripts is the first step toward understanding the quality concerns that arise during testing of IaC scripts and toward providing recommendations for practitioners on quality assurance. We conduct an empirical study with 4,831 Ansible test scripts mined from 104 open source software (OSS) repositories, in which we quantify bug frequency and categorize bugs in test scripts. We further categorize testing patterns, i.e., recurring coding patterns in test scripts, which also correlate with the appearance of bugs. From our empirical study, we observe that 1.8% of the 4,831 Ansible test scripts include a bug and that 45.2% of the 104 repositories contain at least one test script that includes bugs. We identify 7 categories of bugs, which include security bugs and performance bugs related to metadata extraction. We also identify 3 testing patterns that correlate with the appearance of bugs: 'assertion roulette', 'local only testing', and 'remote mystery guest'. Based on our findings, we advocate for detection and mitigation of these 3 testing patterns, as they can have negative implications for troubleshooting failures, reproducible deployments of software, and provisioning of computing infrastructure.
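    The 'assertion roulette' pattern is not specific to Ansible; the sketch below illustrates it in a TypeScript/Jest test rather than an IaC script, with a hypothetical function and values not taken from the study. The problem is the same: several undifferentiated assertions in one test make it hard to tell which check failed and why.

```typescript
// Illustration of 'assertion roulette' (hypothetical example, not from the study):
// multiple assertions with no distinguishing context, so a failure report does
// not indicate which expectation broke.
function provision(): { port: number; user: string; tls: boolean } {
  return { port: 8080, user: "deploy", tls: true };
}

test("provisioned host configuration", () => {
  const cfg = provision();
  expect(cfg.port).toBe(8080); // which of these failed? the report won't say
  expect(cfg.user).toBe("deploy");
  expect(cfg.tls).toBe(true);
  // Mitigation: split into focused tests, each asserting one property.
});
```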
  5. Simulation is widely used for validation of Register-Transfer-Level (RTL) models. While simulating with millions of random or constrained-random tests can cover the majority of the functional scenarios, the number of remaining scenarios can still be huge (hundreds or thousands) for today's industrial designs. Hard-to-activate branches are one of the major contributors to such remaining, untested scenarios. While directed test generation techniques using formal methods are promising for activating branches, it is infeasible to apply them to large designs due to state space explosion. In this paper, we propose a fully automated and scalable approach to cover hard-to-activate branches using concolic testing of RTL models. While prior applications of concolic testing to hardware designs have shown promising results in improving overall coverage, they are not designed to activate specific targets such as uncovered corner cases and rare scenarios. This paper makes two important contributions. (1) We propose a directed test generation technique that activates a target through effective utilization of concolic testing on RTL models. (2) We develop efficient learning and clustering techniques that minimize overlapping searches across targets to drastically reduce the overall test generation effort.
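    As a software analogue of a hard-to-activate branch (the work above targets RTL models, so this TypeScript example is purely illustrative), the sketch below shows a branch that random stimuli reach with probability roughly 2^-32, while a concolic or directed approach can solve the branch condition and generate a covering test directly.

```typescript
// Software analogue of a hard-to-activate branch (illustrative only).
function handleCommand(cmd: number): string {
  if (cmd === 0x5a5a5a5a) {
    return "entered rare debug mode"; // the hard-to-activate target
  }
  return "normal operation";
}

// Random testing: almost certainly stays on the common path.
console.log(handleCommand(Math.floor(Math.random() * 0x100000000)));
// Directed (concolic-style) test derived from the branch condition:
console.log(handleCommand(0x5a5a5a5a));
```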