Algorithms operating on real numbers are implemented as floating-point computations in practice, but floating-point operations introduce roundoff errors that can degrade the accuracy of the result. We propose a functional programming language with a type system that can express quantitative bounds on roundoff error. Our type system combines a sensitivity analysis, enforced through a linear typing discipline, with a novel graded monad to track the accumulation of roundoff errors. We prove that our type system is sound by relating the denotational semantics of our language to the exact and floating-point operational semantics. To demonstrate our system, we instantiate it with error metrics proposed in the numerical analysis literature and show how to incorporate rounding operations that faithfully model aspects of the IEEE 754 floating-point standard. To show that our language can be a useful tool for automated error analysis, we develop a prototype implementation that infers error bounds competitive with those of existing tools, while often running significantly faster. Finally, we consider semantic extensions of our graded monad to bound error under more complex rounding behaviors, such as non-deterministic and randomized rounding.
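As a concrete illustration of the roundoff-error accumulation that such a type system bounds, the following standalone Python sketch (illustrative only, not taken from the paper) compares a floating-point summation against exact rational arithmetic and checks the computed relative error against the textbook worst-case summation bound:

```python
from fractions import Fraction

def fp_sum(xs):
    """Left-to-right IEEE 754 double-precision summation."""
    acc = 0.0
    for x in xs:
        acc += x
    return acc

# 0.1 is not exactly representable in binary, so rounding is visible.
xs = [0.1] * 10
computed = fp_sum(xs)
exact = sum(Fraction(x) for x in xs)   # exact sum over the rationals
rel_err = abs(Fraction(computed) - exact) / abs(exact)

# Textbook worst-case bound for summing n terms left to right:
# (n - 1) * u / (1 - (n - 1) * u), with unit roundoff u = 2**-53.
u = Fraction(1, 2**53)
n = len(xs)
bound = (n - 1) * u / (1 - (n - 1) * u)
assert 0 < rel_err <= bound
```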
This content will become publicly available on June 10, 2026
Bean: A Language for Backward Error Analysis
Backward error analysis offers a method for assessing the quality of numerical programs in the presence of floating-point rounding errors. However, techniques from the numerical analysis literature for quantifying backward error require substantial human effort, and there are currently no tools or automated methods for statically deriving sound backward error bounds. To address this gap, we propose Bean, a typed first-order programming language designed to express quantitative bounds on backward error. Bean’s type system combines a graded coeffect system with strict linearity to soundly track the flow of backward error through programs. We prove the soundness of our system using a novel categorical semantics, where every Bean program denotes a triple of related transformations that together satisfy a backward error guarantee. To illustrate Bean’s potential as a practical tool for automated backward error analysis, we implement a variety of standard algorithms from numerical linear algebra in Bean, establishing fine-grained backward error bounds via typing in a compositional style. We also develop a prototype implementation of Bean that infers backward error bounds automatically. Our evaluation shows that these inferred bounds match worst-case theoretical relative backward error bounds from the literature, underscoring Bean’s utility in validating a key property of numerical programs: numerical stability.
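The backward error guarantee that Bean’s types express can be illustrated on the textbook example of summation: the computed sum is the exact sum of slightly perturbed inputs. The Python sketch below (an illustration of the concept, not Bean code) constructs such a perturbation witness explicitly, using a uniform scaling that suffices because summation is linear and the exact sum here is nonzero:

```python
from fractions import Fraction

def fp_sum(xs):
    """Left-to-right IEEE 754 double-precision summation."""
    acc = 0.0
    for x in xs:
        acc += x
    return acc

def backward_error_witness(xs):
    """A uniform relative perturbation delta such that the EXACT sum of
    the perturbed inputs x_i * (1 + delta) equals the COMPUTED sum.
    Uniform scaling works because summation is linear; this sketch
    assumes the exact sum is nonzero."""
    computed = Fraction(fp_sum(xs))
    exact = sum(Fraction(x) for x in xs)
    return computed / exact - 1

xs = [0.1, 0.2, 0.3, 0.4]
delta = backward_error_witness(xs)

# The computed result is the exact answer to a nearby problem...
perturbed = sum(Fraction(x) * (1 + delta) for x in xs)
assert perturbed == Fraction(fp_sum(xs))

# ...and "nearby" means a relative perturbation on the order of the
# unit roundoff u = 2**-53 (here, comfortably under 2**-50).
assert abs(delta) < Fraction(1, 2**50)
```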
- Award ID(s):
- 2219757
- PAR ID:
- 10637290
- Publisher / Repository:
- ACM
- Date Published:
- Journal Name:
- Proceedings of the ACM on Programming Languages
- Volume:
- 9
- Issue:
- PLDI
- ISSN:
- 2475-1421
- Page Range / eLocation ID:
- 1838 to 1862
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper presents a program analysis method that generates program summaries involving polynomial arithmetic. Our approach builds on prior techniques that use solvable polynomial maps for summarizing loops. These techniques are able to generate all polynomial invariants for a restricted class of programs, but cannot be applied to programs outside of this class, for instance, programs with nested loops, conditional branching, or unstructured control flow. There is currently no approach for applying these prior methods to general programs. This paper bridges that gap. Instead of restricting the kinds of programs we can handle, our method abstracts every loop into a model that can be solved with prior techniques, bringing prior work on solvable polynomial maps to bear on general programs. While no method can generate all polynomial invariants for arbitrary programs, our method establishes its merit through a monotonicity result. We have implemented our techniques and tested them on a suite of benchmarks from the literature. Our experiments indicate that our techniques show promise on challenging verification tasks requiring non-linear reasoning.
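To make the notion of a polynomial loop invariant concrete, here is a toy example (not drawn from the paper): the loop body is a solvable polynomial map over the program variables, and the candidate invariant 4a = (b - 1)^2 is checked at every iteration and on exit:

```python
def run(n):
    """Sum the first n odd numbers via a solvable polynomial map."""
    a, b = 0, 1
    for _ in range(n):
        # Loop body is the polynomial map (a, b) -> (a + b, b + 2).
        assert 4 * a == (b - 1) ** 2   # candidate polynomial invariant
        a, b = a + b, b + 2
    assert 4 * a == (b - 1) ** 2       # invariant also holds on exit
    return a

# The invariant entails the summary a == n**2 after n iterations.
assert run(10) == 10 ** 2
```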
-
We propose a new synthesis algorithm that can efficiently search programs with local variables (e.g., those introduced by lambdas). Prior bottom-up synthesis algorithms are not able to evaluate programs with free local variables, and therefore cannot effectively reduce the search space of such programs (e.g., using standard observational equivalence reduction techniques), making synthesis slow. Our algorithm can reduce the space of programs with local variables. The key idea, dubbed lifted interpretation, is to lift up the program interpretation process, from evaluating one program at a time to simultaneously evaluating all programs from a grammar. Lifted interpretation provides a mechanism to systematically enumerate all binding contexts for local variables, thereby enabling us to evaluate and reduce the space of programs with local variables. Our ideas are instantiated in the domain of web automation. The resulting tool, Arborist, can automate a significantly broader range of challenging tasks more efficiently than state-of-the-art techniques, including WebRobot and Helena.
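For readers unfamiliar with the observational-equivalence reduction that this line of work builds on, here is a minimal bottom-up enumeration sketch in Python (a toy illustration, unrelated to Arborist’s actual implementation): candidate programs are keyed by their outputs on a fixed set of inputs, so behaviorally identical programs such as `x + x` and `2 * x` occupy a single slot in the search:

```python
# Toy bottom-up enumerative synthesis with observational-equivalence
# reduction: programs (callables over x) are deduplicated by their
# output signature on a fixed set of test inputs.
inputs = [0, 1, 2, 3]

def signature(f):
    return tuple(f(x) for x in inputs)

# Size-1 programs: the variable x and two small constants.
programs = {signature(f): (expr, f) for expr, f in
            [("x", lambda x: x), ("1", lambda x: 1), ("2", lambda x: 2)]}

# Grow the space twice by combining existing programs with + and *.
for _ in range(2):
    new = {}
    items = list(programs.values())
    for e1, f1 in items:
        for e2, f2 in items:
            for op, g in [("+", lambda a, b: a + b),
                          ("*", lambda a, b: a * b)]:
                expr = f"({e1} {op} {e2})"
                fn = (lambda f1, f2, g: lambda x: g(f1(x), f2(x)))(f1, f2, g)
                sig = signature(fn)
                # Observational equivalence: keep one program per behavior.
                if sig not in programs and sig not in new:
                    new[sig] = (expr, fn)
    programs.update(new)

# A program behaving like x*x + 1 was found (signature on [0,1,2,3]).
assert (1, 2, 5, 10) in programs
```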
-
Machine learning (ML) applications have become an integral part of our lives. ML applications extensively use floating-point computation and involve very large/small numbers; thus, maintaining the numerical stability of such complex computations remains an important challenge. Numerical bugs can lead to system crashes, incorrect output, and wasted computing resources. In this paper, we introduce a novel idea, namely soft assertions (SA), to encode safety/error conditions for the places where numerical instability can occur. A soft assertion is an ML model automatically trained using the dataset obtained during unit testing of unstable functions. Given the values at the unstable function in an ML application, a soft assertion reports how to change these values in order to trigger the instability. We then use the output of soft assertions as signals to effectively mutate inputs to trigger numerical instability in ML applications. In the evaluation, we used the GRIST benchmark, a total of 79 programs, as well as 15 real-world ML applications from GitHub. We compared our tool with 5 state-of-the-art (SOTA) fuzzers. We found all the GRIST bugs and outperformed the baselines. We found 13 numerical bugs in real-world code, one of which had already been confirmed by the GitHub developers. While the baselines mostly found the bugs that report NaN and INF, our tool found numerical bugs with incorrect output. We showed one case where the Tumor Detection Model, trained on Brain MRI images, should have predicted “tumor”, but instead, it incorrectly predicted “no tumor” due to the numerical bugs. Our replication package is located at https://figshare.com/s/6528d21ccd28bea94c32.
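A classic instance of the numerical instability such tools hunt for is catastrophic cancellation. The sketch below (an illustrative example, not from the paper’s benchmarks) shows a naive formula whose answer is destroyed for small inputs, alongside an algebraically equivalent stable rewrite:

```python
import math

def naive(x):
    # (1 - cos x) / x**2 cancels catastrophically for small x:
    # cos(x) rounds to (or right next to) 1.0, and the answer is lost.
    return (1 - math.cos(x)) / (x * x)

def stable(x):
    # Algebraically identical: (1 - cos x)/x**2 == 2*sin(x/2)**2 / x**2.
    s = math.sin(x / 2)
    return 2 * s * s / (x * x)

x = 1e-8   # the true value of the expression here is 0.5 - O(x**2)
bad, good = naive(x), stable(x)
assert abs(good - 0.5) < 1e-12   # stable rewrite keeps full accuracy
assert abs(bad - 0.5) > 0.4      # naive version loses the answer entirely
```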
-
This Letter presents a novel, to the best of our knowledge, method to calibrate multi-focus microscopic structured-light three-dimensional (3D) imaging systems with an electrically adjustable camera focal length. We first leverage the conventional method to calibrate the system with a reference focal length f0. Then we calibrate the system with other discrete focal lengths fi by determining virtual features on a reconstructed white plane using f0. Finally, we fit the polynomial function model using the discrete calibration results for fi. Experimental results demonstrate that our proposed method can calibrate the system consistently and accurately.
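The final polynomial-fitting step can be illustrated with a small standalone sketch (all numbers hypothetical, chosen only for illustration): given calibration results at a few discrete focal-length settings, fit a quadratic model and interpolate at an intermediate setting. Lagrange interpolation stands in here for whatever fitting procedure the Letter actually uses:

```python
def lagrange_quadratic(points):
    """Exact quadratic through three (focal length, parameter) points."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def p(x):
        l0 = (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
        l1 = (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
        l2 = (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1))
        return y0 * l0 + y1 * l1 + y2 * l2
    return p

# Hypothetical calibration results at three discrete focal lengths f_i:
# (lens control value, calibrated intrinsic parameter).
pts = [(4.0, 800.0), (5.0, 1012.0), (6.0, 1232.0)]
model = lagrange_quadratic(pts)

# The fitted model predicts the parameter at an intermediate setting.
assert abs(model(4.5) - 905.0) < 1e-9
```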
