Title: Stabilizing Floating-Point Programs Using Provenance Analysis
Floating-point arithmetic is a loosely standardized approximation of real arithmetic available on many computers today. Architectural and compiler differences can lead to different results across platforms for the same input. If left untreated, this platform dependence, called volatility in this paper, seriously interferes with result reproducibility and, ultimately, program portability. We present an approach to stabilizing floating-point programs against volatility. Our approach, dubbed provenance analysis, traces volatility observed in a given intermediate expression E back to volatility in preceding statements, and quantifies individual contributions to the volatility in E. The statements contributing the most are then stabilized by disambiguating the arithmetic using expression rewriting and control pragmas. The benefit of local (as opposed to program-wide) stabilization is that compilers remain free to apply performance- or precision-enhancing optimizations across program fragments that do not destabilize E. We have implemented our technique in a dynamic analysis tool that reports both volatility and provenance information. We demonstrate that local program stabilization often suffices to reduce platform dependence to an acceptable level.
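As a toy illustration of the idea, not the paper's tool: the sketch below emulates two "platforms" by summing in different orders (a stand-in for reassociation or FMA differences), measures the volatility of a monitored expression E, then pins one statement at a time to a fixed evaluation order. The statement whose pinning removes the most volatility is the one a provenance-style analysis would flag for local stabilization. The example program and inputs are hypothetical.

```python
# Toy provenance-style attribution of volatility (not the paper's tool).

def sum_ltr(xs):                 # stabilized: fixed left-to-right order
    s = 0.0
    for x in xs:
        s += x
    return s

def sum_rtl(xs):                 # a reassociation a compiler might pick
    s = 0.0
    for x in reversed(xs):
        s += x
    return s

def run(orders):
    # Three statements feeding the monitored expression E.
    a = orders[0]([1.0, 1e16, -1e16])   # catastrophic cancellation: order-sensitive
    b = orders[1]([1e-8, 1.0, -1.0])    # mildly order-sensitive
    c = orders[2]([3.0, 4.0, 5.0])      # exact in either order
    return a + b * c                    # E

base = run([sum_ltr] * 3)
print("volatility of E:", abs(run([sum_rtl] * 3) - base))

for i in range(3):
    orders = [sum_rtl] * 3
    orders[i] = sum_ltr                 # stabilize statement i only
    print(f"stabilizing stmt {i} leaves:", abs(run(orders) - base))
```

Here only statement 0 matters: stabilizing it alone collapses the volatility of E to roundoff level, while statements 1 and 2 can be left to the compiler, which is the benefit of local stabilization the abstract describes.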
Award ID(s):
1718235
PAR ID:
10057163
Author(s) / Creator(s):
Date Published:
Journal Name:
Verification, Model Checking, and Abstract Interpretation
Volume:
10145
Page Range / eLocation ID:
228-245
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Floating-point types are notorious for their intricate representation. The effective use of mixed precision, i.e., using various precisions in different computations, is critical to achieving a good balance between accuracy and performance. Unfortunately, reasoning about mixed precision is difficult even for numerical experts. Techniques have been proposed to systematically search over floating-point variables and/or program instructions to find a faster, mixed-precision version of a given program. These techniques, however, are black-box in nature and face scalability limitations due to the large search space. In this paper, we exploit the community structure of floating-point variables to devise a scalable hierarchical search for precision tuning. Specifically, we perform dependence analysis and edge profiling to create a weighted dependence graph that presents a network of floating-point variables. We then formulate hierarchy construction on the network as a community detection problem, and present a hierarchical search algorithm that iteratively lowers precision with respect to communities. We implement our algorithm in the tool HiFPTuner, and show that it exhibits higher search efficiency than the state of the art for 75.9% of the experiments, taking 59.6% less search time on average. Moreover, HiFPTuner finds more profitable configurations for 51.7% of the experiments, with one known to be as good as the global optimum found through exhaustive search.
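A minimal sketch of the community-based idea, not HiFPTuner's actual implementation: build a weighted dependence graph over floating-point variables, detect communities, and try lowering precision one community at a time instead of one variable at a time. The program, variables, edge weights, and tolerance below are hypothetical.

```python
# Sketch: community-guided precision lowering (assumptions, not HiFPTuner).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
import numpy as np

# Weighted dependence graph; weight = how often two variables interact
# in hot code, as edge profiling might report.
G = nx.Graph()
G.add_weighted_edges_from([
    ("x", "y", 90), ("y", "z", 85), ("x", "z", 80),   # hot kernel
    ("a", "b", 70), ("b", "c", 60),                   # accumulation
    ("z", "a", 2),                                    # weak coupling
])

communities = list(greedy_modularity_communities(G, weight="weight"))

def run_program(precision):
    # Stand-in for the instrumented program: evaluate each variable in
    # its assigned precision (np.float32 or np.float64).
    p = {v: precision.get(v, np.float64) for v in G}
    x = p["x"](0.1); y = p["y"](0.2); z = p["z"](x * y + 0.3)
    a = p["a"](z / 7.0); b = p["b"](a * a); c = p["c"](b + 1e-7)
    return float(c)

reference = run_program({})   # all-double baseline
tolerance = 1e-6

# Hierarchical search: lower a whole community at once; keep it low
# only if the result stays within tolerance of the baseline.
assignment = {}
for comm in communities:
    trial = dict(assignment, **{v: np.float32 for v in comm})
    if abs(run_program(trial) - reference) <= tolerance:
        assignment = trial
print("lowered to float32:", sorted(assignment))
```

Searching over communities rather than individual variables shrinks the number of configurations to test, which is the scalability gain the abstract reports.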
  2. Quantitative verification tools compute probabilities, expected rewards, or steady-state values for formal models of stochastic and timed systems. Exact results often cannot be obtained efficiently, so most tools use floating-point arithmetic in iterative algorithms that approximate the quantity of interest. Correctness is thus defined by the desired precision and determines performance. In this paper, we report on the experimental evaluation of these trade-offs performed in QComp 2020: the second friendly competition of tools for the analysis of quantitative formal models. We survey the precision guarantees—ranging from exact rational results to statistical confidence statements—offered by the nine participating tools. They gave rise to a performance evaluation using five tracks with varying correctness criteria, of which we present the results. 
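To make the precision/performance trade-off concrete, here is a toy sketch (not any competing tool): value iteration for a reachability probability on a small Markov chain, stopped when successive iterates differ by less than epsilon. The chain and thresholds are hypothetical; note that this common stopping rule does not by itself guarantee epsilon-precision of the result, which is exactly why tools offer stronger guarantees such as interval iteration or exact rational arithmetic.

```python
# Toy value iteration: tighter epsilon -> more iterations (slower),
# looser epsilon -> fewer iterations but weaker precision.
import numpy as np

# P[s, t] = transition probability; state 2 is the absorbing target,
# state 3 is an absorbing sink.  Values are hypothetical.
P = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.4, 0.2],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

def reach_prob(eps):
    v = np.array([0.0, 0.0, 1.0, 0.0])    # Pr(reach target) per state
    iters = 0
    while True:
        nv = P @ v
        nv[2], nv[3] = 1.0, 0.0            # keep absorbing states fixed
        iters += 1
        if np.max(np.abs(nv - v)) < eps:
            return nv[0], iters
        v = nv

for eps in (1e-3, 1e-6, 1e-12):
    p, n = reach_prob(eps)
    print(f"eps={eps:g}: Pr={p:.12f} after {n} iterations")
```

For this chain the exact answer from state 0 is 0.625, so one can see directly how much residual error each stopping threshold leaves.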
  3. Automated techniques for analyzing floating-point code for roundoff error as well as control-flow instability are of growing importance. It is important to compute rigorous estimates of roundoff error, and to determine the extent of control-flow instability caused by roundoff error flowing into conditional statements. Currently available analysis techniques are either non-rigorous or do not produce tight roundoff error bounds in many practical situations. Our approach, embodied in a new tool called Seesaw, employs symbolic reverse-mode automatic differentiation, smoothly handling conditionals and offering tight error bounds. Key steps in Seesaw include weakening conditionals to accommodate roundoff error, computing a symbolic error function that depends on the program paths taken, and optimizing this function, whose domain may be non-rectangular, by paving it with a rectangle-based cover. Our benchmarks cover many practical examples for which such rigorous analysis has hitherto not been applied, or has yielded inferior results.
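A toy sketch loosely after this idea, not Seesaw itself: record a tape of operations, run a reverse-mode sweep to get the adjoint of each intermediate, and bound first-order roundoff as the sum over operations of |adjoint * value| * u, since each rounded operation perturbs its result by at most |value| * u. The expression and class design are illustrative assumptions.

```python
# Toy reverse-mode AD for a first-order roundoff bound (not Seesaw).
U = 2.0 ** -53                     # unit roundoff for double precision
tape = []                          # (out, dout/dlhs, lhs, dout/drhs, rhs)

class Var:
    __slots__ = ("val", "adj")
    def __init__(self, val):
        self.val, self.adj = val, 0.0
    def __add__(self, o):
        r = Var(self.val + o.val); tape.append((r, 1.0, self, 1.0, o)); return r
    def __mul__(self, o):
        r = Var(self.val * o.val); tape.append((r, o.val, self, self.val, o)); return r

def error_bound(out):
    out.adj = 1.0
    bound = 0.0
    for r, dl, l, rr, rh in reversed(tape):   # reverse sweep over the tape
        l.adj += dl * r.adj                   # propagate adjoint to lhs
        rh.adj += rr * r.adj                  # propagate adjoint to rhs
        bound += abs(r.adj * r.val) * U       # this op's rounding contribution
    return bound

x, y = Var(1.0e8), Var(3.0)
f = x * y + x                       # each op is rounded once
print("f =", f.val, " first-order roundoff bound =", error_bound(f))
```

A symbolic version of the same sweep, with conditionals weakened as the abstract describes, would produce an error function over the input domain rather than a single number.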
  4. This paper addresses the problem of approximating the price of options on discrete and continuous arithmetic averages of the underlying, i.e., discretely and continuously monitored Asian options, in local volatility models. A "path-integral"-type expression for option prices is obtained using a Brownian bridge representation for the transition density between consecutive sampling times and a Laplace asymptotic formula. In the limit where the sampling time window approaches zero, the option price is found to be approximated by a constrained variational problem on paths in time-price space. We refer to the optimizing path as the most-likely path (MLP). An approximation for the implied normal volatility follows accordingly. The small-time asymptotics and the existence of the MLP are also rigorously recovered using large deviation theory.
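A generic sketch of the kind of variational problem described, assuming a local volatility function sigma and continuous monitoring; the notation is ours, and the paper's exact constraints and normalization may differ:

```latex
% Small-time large-deviation energy, minimized over paths x in
% time-price space; the running average is constrained to hit the
% level A, and x_0 = S_0 is the spot.
\min_{x(\cdot)} \; \frac{1}{2}\int_0^T
    \left( \frac{\dot{x}_t}{\sigma(x_t)} \right)^{\!2} dt
\quad \text{s.t.} \quad
x_0 = S_0, \qquad \frac{1}{T}\int_0^T x_t \, dt = A .
```

The minimizer plays the role of the most-likely path, and an implied normal volatility approximation follows from the minimal energy.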