The development of sound and efficient tools that automatically perform floating-point round-off error analysis is an active area of research with applications to embedded systems and scientific computing. In this paper we describe VCFloat2, a novel extension to the VCFloat tool for verifying floating-point C programs in Coq. Like VCFloat1, VCFloat2 soundly and automatically computes round-off error bounds on floating-point expressions, but does so with higher accuracy, better performance, more generality for nonstandard number formats, the ability to reason about external (user-defined or library) functions, and improved modularity for interfacing with other program verification tools in Coq. We evaluate the performance of VCFloat2 using common benchmarks; compared to other state-of-the-art tools, VCFloat2 computes competitive error bounds and transparent certificates that require less time for verification.
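To give a sense of the kind of analysis such tools certify, the following is a minimal Python sketch of forward round-off error propagation over intervals under the standard rounding model fl(a op b) = (a op b)(1 + d) with |d| <= eps. It is an illustration of the general technique, not VCFloat2's Coq development, and the first-order error propagation is our own simplification.

```python
# Minimal sketch of forward round-off error analysis (not VCFloat2 itself).
# Each value carries a real-number interval [lo, hi] and a bound `err` on the
# worst-case absolute difference between the float and real computations.

EPS = 2.0 ** -53  # unit roundoff for IEEE 754 binary64

class Val:
    def __init__(self, lo, hi, err=0.0):
        self.lo, self.hi, self.err = lo, hi, err

    def __add__(self, other):
        lo, hi = self.lo + other.lo, self.hi + other.hi
        mag = max(abs(lo), abs(hi))
        # incoming errors accumulate, plus one rounding of the result
        return Val(lo, hi, self.err + other.err + EPS * mag)

    def __mul__(self, other):
        products = [a * b for a in (self.lo, self.hi)
                          for b in (other.lo, other.hi)]
        lo, hi = min(products), max(products)
        mag_self = max(abs(self.lo), abs(self.hi))
        mag_other = max(abs(other.lo), abs(other.hi))
        # first-order propagation of incoming errors (err*err term dropped),
        # plus one rounding of the result
        err = (self.err * mag_other + other.err * mag_self
               + EPS * max(abs(lo), abs(hi)))
        return Val(lo, hi, err)

# Bound the round-off error of x*y + z for x, y in [1, 2] and z in [0, 1].
x, y, z = Val(1, 2), Val(1, 2), Val(0, 1)
r = x * y + z
print(f"range=[{r.lo}, {r.hi}], round-off bound={r.err:.3e}")
```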
A Mixed Real and Floating-Point Solver
Reasoning about mixed real and floating-point constraints is essential for developing accurate analysis tools for floating-point programs. This paper presents FPRoCK, a prototype tool for solving mixed real and floating-point formulas. FPRoCK transforms a mixed formula into an equisatisfiable one over the reals. This formula is then solved using an off-the-shelf SMT solver. FPRoCK is also integrated with the PRECiSA static analyzer, which computes a sound estimation of the round-off error of a floating-point program. It is used to detect infeasible computational paths, thereby improving the accuracy of PRECiSA.
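To illustrate the general flavor of solving such constraints over the reals, the sketch below encodes a single rounded addition as a real variable constrained to lie within eps-relative error of the exact sum, then queries an off-the-shelf solver. This is a hand-written toy in z3py, not FPRoCK's actual transformation, and it ignores overflow and subnormal behavior.

```python
# Illustrative encoding of one rounded operation over the reals (the general
# idea behind such transformations; not FPRoCK's actual algorithm).
# Requires the z3-solver package.
from z3 import Reals, Solver, If

def zabs(e):
    # absolute value for real-sorted terms
    return If(e >= 0, e, -e)

x, y, r = Reals('x y r')
EPS = 2.0 ** -53  # unit roundoff for IEEE 754 binary64

s = Solver()
# r models fl(x + y): within relative error EPS of the exact real sum.
s.add(zabs(r - (x + y)) <= EPS * zabs(x + y))
# Ask whether rounding alone can flip the sign of a positive sum.
s.add(x + y > 0, r < 0)

print(s.check())  # unsat: an EPS-relative rounding cannot change the sign
```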
- Award ID(s): 1704715
- PAR ID: 10105491
- Date Published:
- Journal Name: Proceedings of the NASA Formal Methods Symposium (NFM)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Algorithms operating on real numbers are implemented as floating-point computations in practice, but floating-point operations introduce roundoff errors that can degrade the accuracy of the result. We propose a functional programming language with a type system that can express quantitative bounds on roundoff error. Our type system combines a sensitivity analysis, enforced through a linear typing discipline, with a novel graded monad to track the accumulation of roundoff errors. We prove that our type system is sound by relating the denotational semantics of our language to the exact and floating-point operational semantics. To demonstrate our system, we instantiate it with error metrics proposed in the numerical analysis literature and we show how to incorporate rounding operations that faithfully model aspects of the IEEE 754 floating-point standard. To show that the language can be a useful tool for automated error analysis, we develop a prototype implementation that infers error bounds that are competitive with existing tools, while often running significantly faster. Finally, we consider semantic extensions of our graded monad to bound error under more complex rounding behaviors, such as non-deterministic and randomized rounding.
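As a rough dynamic analogue of the static grading (our own simplification in Python, not the paper's typed language), one can picture each computation carrying a grade that bounds its accumulated relative roundoff error:

```python
# Intuition sketch only: a value graded by an upper bound on its accumulated
# relative roundoff error. The paper's type system tracks such grades
# statically; here we mimic the bookkeeping dynamically.
EPS = 2.0 ** -53  # unit roundoff, binary64

class Graded:
    def __init__(self, value, grade=0.0):
        self.value = value   # the computed float
        self.grade = grade   # bound on accumulated relative error

    @staticmethod
    def pure(x):
        # treat the input as exact (ignoring the literal's representation error)
        return Graded(x, 0.0)

    def bind(self, f, op_grade):
        # Sequencing adds grades (to first order): the error already carried
        # by the input and the fresh rounding error of the operation.
        return Graded(f(self.value), self.grade + op_grade)

a = Graded.pure(0.1)
b = a.bind(lambda v: v * 3.0, EPS)   # one rounded multiplication
c = b.bind(lambda v: v + 1.0, EPS)   # one rounded addition
print(c.value, "relative error <=", c.grade)
```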
Floating-point types are notorious for their intricate representation. The effective use of mixed precision, i.e., using various precisions in different computations, is critical to achieve a good balance between accuracy and performance. Unfortunately, reasoning about mixed precision is difficult even for numerical experts. Techniques have been proposed to systematically search over floating-point variables and/or program instructions to find a faster, mixed-precision version of a given program. These techniques, however, are characterized by their black box nature, and face scalability limitations due to the large search space. In this paper, we exploit the community structure of floating-point variables to devise a scalable hierarchical search for precision tuning. Specifically, we perform dependence analysis and edge profiling to create a weighted dependence graph that presents a network of floating-point variables. We then formulate hierarchy construction on the network as a community detection problem, and present a hierarchical search algorithm that iteratively lowers precision with regard to communities. We implement our algorithm in the tool HiFPTuner, and show that it exhibits higher search efficiency over the state of the art for 75.9% of the experiments, taking 59.6% less search time on average. Moreover, HiFPTuner finds more profitable configurations for 51.7% of the experiments, with one known to be as good as the global optimum found through exhaustive search.
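The shape of such a community-guided search can be sketched as follows. The graph, edge weights, and accuracy check below are made up for illustration; HiFPTuner's actual analysis and search algorithm differ.

```python
# Sketch of community-guided precision tuning (simplified; not HiFPTuner's
# actual code). Requires networkx. Edge weights stand in for profiled
# interaction counts between floating-point variables.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 90), ("b", "c", 80),   # tightly coupled group
    ("d", "e", 70),                   # another group
    ("c", "d", 1),                    # weak cross-group dependence
])

precision = {v: "double" for v in G}  # start everything in double

def passes_accuracy_check(prec):
    # Placeholder for re-running the program and checking its error
    # against a threshold; here a fake criterion for demonstration.
    return sum(p == "float" for p in prec.values()) <= 3

# Try lowering one community at a time instead of one variable at a time,
# shrinking the search space dramatically.
for community in greedy_modularity_communities(G, weight="weight"):
    trial = dict(precision)
    for v in community:
        trial[v] = "float"
    if passes_accuracy_check(trial):
        precision = trial

print(precision)
```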
The classical universal approximation (UA) theorem for neural networks establishes mild conditions under which a feedforward neural network can approximate a continuous function f with arbitrary accuracy. A recent result shows that neural networks also enjoy a more general interval universal approximation (IUA) theorem, in the sense that the abstract interpretation semantics of the network using the interval domain can approximate the direct image map of f (i.e., the result of applying f to a set of inputs) with arbitrary accuracy. These theorems, however, rest on the unrealistic assumption that the neural network computes over infinitely precise real numbers, whereas their software implementations in practice compute over finite-precision floating-point numbers. An open question is whether the IUA theorem still holds in the floating-point setting. This paper introduces the first IUA theorem for floating-point neural networks that proves their remarkable ability to perfectly capture the direct image map of any rounded target function f, showing no limits exist on their expressiveness. Our IUA theorem in the floating-point setting exhibits material differences from the real-valued setting, which reflects the fundamental distinctions between these two computational models. This theorem also implies surprising corollaries, which include (i) the existence of provably robust floating-point neural networks; and (ii) the computational completeness of the class of straight-line programs that use only floating-point additions and multiplications for the class of all floating-point programs that halt.
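The interval abstract interpretation underlying these results can be made concrete with a toy example of sound interval propagation through a single affine-plus-ReLU layer (an illustration of the interval domain only, unrelated to the paper's constructions):

```python
# Toy interval propagation through one ReLU layer (illustrates the interval
# abstract interpretation underlying IUA theorems; not the paper's proof).

def interval_affine(lows, highs, W, b):
    """Soundly propagate per-input intervals through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, l, h in zip(row, lows, highs):
            lo += w * l if w >= 0 else w * h  # endpoint minimizing w*x
            hi += w * h if w >= 0 else w * l  # endpoint maximizing w*x
        out_lo.append(lo)
        out_hi.append(hi)
    return out_lo, out_hi

def interval_relu(lows, highs):
    return [max(0.0, l) for l in lows], [max(0.0, h) for h in highs]

W = [[1.0, -2.0], [0.5, 0.5]]
b = [0.0, -1.0]
lo, hi = interval_affine([0.0, 0.0], [1.0, 1.0], W, b)
lo, hi = interval_relu(lo, hi)
print(list(zip(lo, hi)))  # sound enclosure of the layer's direct image
```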
With ever-increasing volumes of scientific floating-point data being produced by high-performance computing applications, significantly reducing scientific floating-point data size is critical, and error-controlled lossy compressors have been developed for years. None of the existing scientific floating-point lossy data compressors, however, supports effective fixed-ratio lossy compression. Yet fixed-ratio lossy compression for scientific floating-point data not only compresses to the requested ratio but also respects a user-specified error bound with higher fidelity. In this paper, we present FRaZ: a generic fixed-ratio lossy compression framework respecting user-specified error constraints. The contribution is twofold. (1) We develop an efficient iterative approach to accurately determine the appropriate error settings for different lossy compressors based on target compression ratios. (2) We perform a thorough performance and accuracy evaluation for our proposed fixed-ratio compression framework with multiple state-of-the-art error-controlled lossy compressors, using several real-world scientific floating-point datasets from different domains. Experiments show that FRaZ effectively identifies the optimum error setting in the entire error setting space of any given lossy compressor. While fixed-ratio lossy compression is slower than fixed-error compression, it provides an important new lossy compression technique for users of very large scientific floating-point datasets.
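The core iterative idea, searching for the error bound whose achieved ratio meets the target, can be sketched as a bracketing search. The stand-in compressor below is a made-up monotone model; FRaZ drives real compressors (e.g., SZ or ZFP) and must handle non-monotone regions of the error-setting space.

```python
# Sketch of the fixed-ratio idea (simplified; not FRaZ's actual algorithm):
# search for the error bound whose compression ratio meets the target.
import math

def compression_ratio(error_bound):
    # Stand-in for invoking a real error-bounded compressor and measuring
    # the achieved ratio; assumed here to grow with the error bound.
    return 1.0 + 10.0 * math.log1p(1e6 * error_bound)

def find_error_bound(target_ratio, lo=1e-12, hi=1e-1, iters=50):
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric midpoint: bounds span decades
        if compression_ratio(mid) < target_ratio:
            lo = mid              # need a looser bound for more compression
        else:
            hi = mid
    return hi

eb = find_error_bound(target_ratio=60.0)
print(f"error bound ~ {eb:.3e}, achieved ratio ~ {compression_ratio(eb):.1f}")
```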