Title: Robustness Analysis of Loop-Free Floating-Point Programs via Symbolic Automatic Differentiation
Automated techniques for analyzing floating-point code for roundoff error and control-flow instability are of growing importance. It is important both to compute rigorous estimates of roundoff error and to determine the extent of control-flow instability caused by roundoff error flowing into conditional statements. Currently available analysis techniques are either non-rigorous or fail to produce tight roundoff error bounds in many practical situations. Our approach, embodied in a new tool called Seesaw, employs symbolic reverse-mode automatic differentiation, smoothly handling conditionals and offering tight error bounds. Key steps in Seesaw include weakening conditionals to accommodate roundoff error, computing a symbolic error function that depends on the program paths taken, and optimizing this function, whose domain may be non-rectangular, by paving it with a rectangle-based cover. Our benchmarks cover many practical examples to which such rigorous analysis has hitherto not been applied, or for which it has yielded inferior results.
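
To make the style of analysis concrete, here is a minimal sketch, NOT Seesaw itself: each floating-point operation's result is perturbed by (1 + d_i) with |d_i| <= eps, the perturbed expression is differentiated symbolically with respect to each d_i, and the first-order error sum_i |df/dd_i| * eps is maximized over the input box. Seesaw does this symbolically via reverse-mode AD and optimizes rigorously over a paved, possibly non-rectangular domain; the grid sampling below is only a non-rigorous stand-in for that optimization step, and the example program and box are illustrative assumptions.

```python
# A minimal sketch of first-order roundoff analysis via symbolic differentiation.
import itertools
import sympy as sp

EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

# Example straight-line program: f(x, y) = (x + y) * x, two rounded operations.
# Each operation's result is perturbed by (1 + d_i) with |d_i| <= EPS.
x, y = sp.symbols("x y", real=True)
d0, d1 = sp.symbols("d0 d1", real=True)
f_rounded = ((x + y) * (1 + d0) * x) * (1 + d1)

# Sensitivities: partial derivatives with respect to each d_i, evaluated at d = 0.
zero = {d0: 0, d1: 0}
sens = [sp.diff(f_rounded, d).subs(zero) for d in (d0, d1)]

# First-order absolute roundoff bound over a rectangular input box:
#   max over the box of (|df/dd0| + |df/dd1|) * EPS,
# approximated here by naive grid sampling (a rigorous tool would use
# verified global optimization instead).
box = {x: (1.0, 2.0), y: (0.5, 1.5)}
grid = [dict(zip(box, pt)) for pt in itertools.product(
    *[[lo + k * (hi - lo) / 8 for k in range(9)] for lo, hi in box.values()])]
bound = max(sum(abs(s.subs(p)) for s in sens) for p in grid) * EPS
print(f"first-order roundoff bound on the box: {float(bound):.3e}")
```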
Award ID(s):
1918497
PAR ID:
10294465
Date Published:
Journal Name:
IEEE Cluster 2021
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Melquiond, Guillaume; Tang, Ping Tak Peter (Eds.)
    Theorem proving demonstrates promising potential for verifying problems beyond the capabilities of SMT-solver-based verification tools. We explore and showcase the capability of Lean, an increasingly popular theorem-proving tool, in deriving the error bounds of table-based Logarithmic Number Systems (LNS). LNS reduces the number of bits needed to represent a high dynamic range of real numbers with finite precision, and it performs multiplication and division efficiently. However, in LNS, addition and subtraction become non-linear functions that must be approximated, typically using precomputed look-up tables. We provide the first rigorous analysis of LNS that covers first-order Taylor approximation, cotransformation techniques inspired by the European Logarithmic Microprocessor, and the errors introduced by the fixed-point arithmetic used in LNS implementations. By analyzing all error sources, deriving symbolic error bounds for each, and accumulating these to obtain the final error bound, we prove the correctness of these bounds using Lean and its Mathlib library. We empirically validate our analysis using an exhaustive Python implementation, demonstrating that our analytical interpolation bounds are tight and that our analytical cotransformation bounds overestimate by between one and two bits. (A minimal sketch of the interpolation-error argument appears after this list.)
  2. Virtually all real-valued computations are carried out using floating-point data types and operations. With increasing emphasis on overall computational efficiency, compilers are increasingly attempting to optimize floating-point expressions. Practical reasoning about the correctness of these optimizations requires error analysis procedures that are rigorous (ideally, able to generate proof certificates), can handle a wide variety of operators (e.g., transcendentals), and support all normal programmatic constructs (e.g., conditionals and loops). Unfortunately, none of today's approaches achieves this combination. This position paper summarizes recent progress achieved in the community on this topic. It then showcases the component techniques present within our own rigorous floating-point precision-tuning framework, FPTuner, essentially offering a collection of "grab and go" tools that others can benefit from. Finally, we present FPTuner's limitations and describe how we can exploit contemporaneous research to improve it. (A minimal sketch of the precision-tuning idea appears after this list.)
  3. Algorithms operating on real numbers are implemented as floating-point computations in practice, but floating-point operations introduce roundoff errors that can degrade the accuracy of the result. We propose Λnum, a functional programming language with a type system that can express quantitative bounds on roundoff error. Our type system combines a sensitivity analysis, enforced through a linear typing discipline, with a novel graded monad to track the accumulation of roundoff errors. We prove that our type system is sound by relating the denotational semantics of our language to the exact and floating-point operational semantics. To demonstrate our system, we instantiate Λnum with error metrics proposed in the numerical analysis literature, and we show how to incorporate rounding operations that faithfully model aspects of the IEEE 754 floating-point standard. To show that Λnum can be a useful tool for automated error analysis, we develop a prototype implementation that infers error bounds competitive with those of existing tools, while often running significantly faster. Finally, we consider semantic extensions of our graded monad to bound error under more complex rounding behaviors, such as non-deterministic and randomized rounding. (A minimal sketch of graded error tracking appears after this list.)
  4. While standard statistical inference techniques and machine-learning generalization bounds assume that tests are run on data selected independently of the hypotheses, practical data analysis and machine learning are usually iterative and adaptive processes, where the same holdout data are often used to test a sequence of hypotheses (or models), each of which may depend on the outcome of previous tests on the same data. In this work, we present RADABOUND, a rigorous, efficient, and practical procedure for controlling the generalization error when a holdout sample is used for multiple adaptive testing. Our solution is based on a new application of Rademacher complexity generalization bounds, adapted to dependent tests. We demonstrate the statistical power and practicality of our method through extensive simulations and comparisons to alternative approaches. In particular, we show that our rigorous solution is substantially more powerful and efficient than the differential-privacy-based approach proposed in Dwork et al. [1]-[3]. (A minimal sketch of the empirical Rademacher complexity estimate appears after this list.)
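
For item 1, here is a minimal sketch of the interpolation error being bounded; the table spacing DELTA and the sweep range are illustrative assumptions, and the rigorous version of this argument is what the paper proves in Lean. LNS addition uses Phi(x) = log2(1 + 2^x), tabulated with spacing DELTA and evaluated between table entries by a first-order Taylor step, so the interpolation error is at most max|Phi''| * DELTA^2 / 2.

```python
# A minimal sketch of the LNS interpolation-error bound (assumed parameters).
import math

DELTA = 2.0 ** -8   # assumed table spacing

def phi(x):
    """Phi(x) = log2(1 + 2^x), the LNS addition function being tabulated."""
    return math.log2(1.0 + 2.0 ** x)

def phi_prime(x):
    """Phi'(x) = 2^x / (1 + 2^x)."""
    t = 2.0 ** x
    return t / (1.0 + t)

def phi_taylor(x):
    """First-order Taylor step from the nearest table point below x."""
    x0 = math.floor(x / DELTA) * DELTA
    return phi(x0) + phi_prime(x0) * (x - x0)

# Analytical bound: |Phi''(x)| = (ln 2) * 2^x / (1 + 2^x)^2 <= (ln 2) / 4,
# so the interpolation error is at most ((ln 2) / 4) * DELTA^2 / 2.
analytic = (math.log(2.0) / 4.0) * DELTA ** 2 / 2.0

# Exhaustive sweep over [-1, 0), mirroring the paper's empirical validation.
measured = max(abs(phi(xx) - phi_taylor(xx))
               for xx in (i * DELTA / 64.0 for i in range(-64 * 256, 0)))
print(f"measured {measured:.3e} <= analytic bound {analytic:.3e}")
```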
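For item 2, here is a minimal sketch of the precision-tuning idea only; FPTuner itself derives rigorous symbolic error bounds and solves an optimization problem, whereas the cost model, error threshold, and sampling-based error estimate below are illustrative assumptions. The sketch enumerates precision assignments for the operations of a small expression and keeps the cheapest one whose estimated error meets the target bound.

```python
# A minimal precision-tuning sketch (assumed cost model and threshold).
import itertools
import numpy as np

THRESHOLD = 1e-6                        # assumed target on absolute error
COST = {np.float32: 1, np.float64: 2}   # assumed per-operation cost model

def f_mixed(xs, p_add, p_mul):
    """Evaluate f(x) = (x + 1) * x with the addition in p_add, multiply in p_mul."""
    s = (xs.astype(p_add) + p_add(1.0)).astype(p_add)
    return (s.astype(p_mul) * xs.astype(p_mul)).astype(p_mul)

xs = np.linspace(0.0, 2.0, 10001)
ref = (xs + 1.0) * xs                   # float64 reference evaluation

best = None
for p_add, p_mul in itertools.product([np.float32, np.float64], repeat=2):
    err = np.max(np.abs(f_mixed(xs, p_add, p_mul) - ref))
    cost = COST[p_add] + COST[p_mul]
    if err <= THRESHOLD and (best is None or cost < best[0]):
        best = (cost, p_add.__name__, p_mul.__name__, float(err))
print("cheapest assignment meeting the bound:", best)
```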
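For item 3, the following minimal sketch conveys the graded-monad idea under simplifying assumptions (first-order error accumulation, same-sign addition); Λnum is a typed language with a proven-sound semantics, not a Python class. Each value carries a grade bounding its accumulated relative error, and every operation combines the input grades and adds one unit roundoff.

```python
# A minimal sketch of graded error tracking (simplifying assumptions noted above).
from dataclasses import dataclass

U = 2.0 ** -53  # unit roundoff, IEEE double precision

@dataclass(frozen=True)
class Graded:
    val: float  # the computed floating-point value
    err: float  # the "grade": a bound on accumulated relative error

    def __mul__(self, other):
        # to first order, relative errors of the factors add, plus one rounding
        return Graded(self.val * other.val, self.err + other.err + U)

    def __add__(self, other):
        # for same-sign operands, addition does not amplify relative error
        return Graded(self.val + other.val, max(self.err, other.err) + U)

def exact(v: float) -> Graded:
    """The monad's 'return': an exact input carries grade 0."""
    return Graded(v, 0.0)

# (x * y) + (x * x): two multiplications (grade u each), one addition -> 2u.
xg, yg = exact(1.5), exact(0.25)
z = xg * yg + xg * xg
print(f"value = {z.val}, relative error bound = {z.err / U:.0f}u")
```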
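For item 4, this minimal sketch (with synthetic stand-in data; not the RADABOUND procedure itself) estimates the quantity its guarantees are built on: the empirical Rademacher complexity R = E_sigma[ sup_h (1/n) * sum_i sigma_i * h(z_i) ] of a set of test statistics on a holdout sample, computed by Monte Carlo over random sign vectors.

```python
# A minimal Monte Carlo estimate of empirical Rademacher complexity.
import numpy as np

rng = np.random.default_rng(0)

n, n_hypotheses, n_sigma = 500, 20, 2000
# losses[h, i]: loss of hypothesis h on holdout point i (synthetic data here)
losses = rng.uniform(0.0, 1.0, size=(n_hypotheses, n))

rad = 0.0
for _ in range(n_sigma):
    sigma = rng.choice([-1.0, 1.0], size=n)   # random Rademacher signs
    rad += np.max(losses @ sigma) / n         # sup over the hypothesis set
rad /= n_sigma
print(f"estimated empirical Rademacher complexity: {rad:.4f}")
```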