Search results: all records where Award ID contains 1704790


  1. Bounded-exhaustive testing (BET), which exercises a program under test on all inputs up to some bounds, is an effective method for detecting software bugs. Systematic property-based testing is a BET approach in which developers write test generation programs that describe properties of test inputs. Hybrid test generation programs offer the most expressive way to write the desired properties by freely combining declarative filters and imperative generators. However, exploring hybrid test generation programs to obtain test inputs is both computationally demanding and challenging to parallelize. We present the first programming and execution models, dubbed Tempo, for parallel exploration of hybrid test generation programs. We describe two strategies for mapping the computation to parallel hardware and implement both for GPUs and CPUs. We evaluated Tempo by generating instances of various data structures commonly used for benchmarking in the BET domain. Additionally, we generated CUDA programs to stress-test CUDA compilers, finding four bugs confirmed by the developers.
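To make the hybrid style concrete, here is a minimal Python sketch (not Tempo's actual API; all names are illustrative) that pairs an imperative generator with a declarative filter to enumerate every binary search tree up to a small bound:

```python
from itertools import product

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def gen_trees(n, keys):
    """Imperative generator: enumerate all tree shapes with exactly n nodes,
    labelled with the given keys."""
    if n == 0:
        yield None
        return
    for key, left_size in product(keys, range(n)):
        for left in gen_trees(left_size, keys):
            for right in gen_trees(n - 1 - left_size, keys):
                yield Node(key, left, right)

def is_bst(t, lo=float("-inf"), hi=float("inf")):
    """Declarative filter: keep only trees satisfying the search-tree invariant."""
    return (t is None or
            (lo < t.key < hi and
             is_bst(t.left, lo, t.key) and
             is_bst(t.right, t.key, hi)))

# Bounded-exhaustive inputs: every BST with at most 3 nodes and keys in 1..3.
bound = 3
inputs = [t for n in range(bound + 1)
            for t in gen_trees(n, range(1, bound + 1))
            if is_bst(t)]
print(len(inputs))  # number of valid test inputs within the bound
```

The generator and the filter correspond to the imperative and declarative halves of a hybrid test generation program; exploring their combination exhaustively up to the bound is the computation that Tempo parallelizes.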
  2. We introduce a new idea for enhancing the constraint solving engines that drive many powerful but computationally expensive analysis and synthesis techniques. Our insight is that in many cases the engines are run repeatedly against input constraints that encode related problems of increasing complexity, and domain-specific knowledge can reduce that complexity. Moreover, even for a single formula the engine may perform multiple expensive tasks with commonalities that can be estimated and exploited. We believe these relationships lay a foundation for making the engines more effective and scalable. We illustrate the viability of our idea in the context of a well-known solver for imperative constraints, and discuss how the idea generalizes to more general-purpose methods.
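As a rough illustration of the insight (not the paper's technique), the Python sketch below caches the solutions found at smaller bounds so that repeated queries at increasing bounds reuse earlier work instead of restarting the search; the toy constraint and all names are hypothetical:

```python
from functools import lru_cache

MAX = 6  # hypothetical domain: values drawn from 1..MAX

@lru_cache(maxsize=None)
def valid_lists(n):
    """Toy constraint: strictly increasing lists of length n.
    Solutions at bound n are built by extending cached solutions at bound n-1,
    so a later, harder query pays only for the new bound."""
    if n == 0:
        return ((),)
    out = []
    for prefix in valid_lists(n - 1):          # reuse cached smaller solutions
        last = prefix[-1] if prefix else 0
        for v in range(last + 1, MAX + 1):     # extend without violating the constraint
            out.append(prefix + (v,))
    return tuple(out)

# Repeated queries of increasing complexity share the work done for earlier ones.
for bound in range(1, 5):
    print(bound, len(valid_lists(bound)))
```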
  3. Deep Neural Networks (DNNs) are increasingly used in a variety of applications, many of them with serious safety and security concerns. This paper describes DeepCheck, a new approach for validating DNNs based on core ideas from program analysis, specifically from symbolic execution. DeepCheck implements novel techniques for lightweight symbolic analysis of DNNs and applies them to address two challenging problems in DNN analysis: 1) identifying important input features and 2) leveraging those features to create adversarial inputs. Experimental results with an MNIST image classification network and a sentiment network for textual data show that DeepCheck promises to be a valuable tool for DNN analysis.
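The following Python sketch, using hypothetical random weights rather than the paper's networks or implementation, illustrates the underlying idea of lightweight symbolic analysis: along the activation pattern that a concrete input induces, a ReLU network behaves as an affine function, and its coefficients can rank input features and guide an adversarial perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny, hypothetical 2-layer ReLU classifier (4 inputs, 3 classes).
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

x = rng.normal(size=4)
logits = forward(x)
label = int(np.argmax(logits))

# Symbolic-execution view: fixing the activation pattern induced by x makes each
# ReLU either the identity or zero, so this execution path is affine in x.
active = (W1 @ x + b1 > 0).astype(float)      # which ReLUs fired on this path
A = W2 @ (W1 * active[:, None])               # per-class linear coefficients
c = W2 @ (b1 * active) + b2                   # per-class offsets

# 1) Important input features: largest-magnitude coefficients for the chosen class.
importance = np.abs(A[label])
top = np.argsort(importance)[::-1]
print("most important features:", top)

# 2) Adversarial candidate: nudge the top feature toward the runner-up class.
runner_up = int(np.argsort(logits)[-2])
direction = np.sign(A[runner_up, top[0]] - A[label, top[0]])
x_adv = x.copy()
x_adv[top[0]] += 2.0 * direction
print("original vs perturbed prediction:",
      label, int(np.argmax(forward(x_adv))))
```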