The classical universal approximation (UA) theorem for neural networks establishes mild conditions under which a feedforward neural network can approximate a continuous function f with arbitrary accuracy. A recent result shows that neural networks also enjoy a more general interval universal approximation (IUA) theorem, in the sense that the abstract interpretation semantics of the network using the interval domain can approximate the direct image map of f (i.e., the result of applying f to a set of inputs) with arbitrary accuracy. These theorems, however, rest on the unrealistic assumption that the neural network computes over infinitely precise real numbers, whereas their software implementations in practice compute over finite-precision floating-point numbers. An open question is whether the IUA theorem still holds in the floating-point setting. This paper introduces the first IUA theorem for floating-point neural networks that proves their remarkable ability to perfectly capture the direct image map of any rounded target function f, showing no limits exist on their expressiveness. Our IUA theorem in the floating-point setting exhibits material differences from the real-valued setting, which reflects the fundamental distinctions between these two computational models. This theorem also implies surprising corollaries, which include (i) the existence of provably robust floating-point neural networks; and (ii) the computational completeness of the class of straight-line programs that use only floating-point additions and multiplications for the class of all floating-point programs that halt.
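To make the interval semantics concrete, here is a minimal Python sketch (our illustration, not the construction from the paper) that propagates a box of inputs through a single floating-point ReLU layer, widening each bound outward by one ulp so the computed interval over-approximates the direct image of the layer under rounding:

```python
# Minimal sketch (not the paper's construction): sound interval propagation
# through one ReLU layer of a floating-point network. Outward rounding is
# approximated by nudging bounds with math.nextafter (Python 3.9+); a real
# abstract interpreter would use directed rounding modes instead.
import math

def interval_affine(lo, hi, weights, bias):
    """Propagate input intervals [lo[i], hi[i]] through y = W x + b."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        acc_lo, acc_hi = b, b
        for w, l, h in zip(w_row, lo, hi):
            # Endpoint ordering depends on the sign of the weight.
            terms = (w * l, w * h)
            acc_lo += min(terms)
            acc_hi += max(terms)
        # Widen outward by one ulp to stay sound under rounding (assumption).
        out_lo.append(math.nextafter(acc_lo, -math.inf))
        out_hi.append(math.nextafter(acc_hi, math.inf))
    return out_lo, out_hi

def interval_relu(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Example: abstract image of the box [-1, 1] x [0, 2] under a tiny ReLU layer.
lo, hi = interval_affine([-1.0, 0.0], [1.0, 2.0],
                         [[0.5, -0.25], [1.0, 1.0]], [0.1, -0.3])
print(interval_relu(lo, hi))
```

The IUA results are about how tightly such interval abstractions can be made to match the true direct image map; the ulp widening above is only the standard soundness device, not part of the expressiveness argument.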
Moving the Needle on Rigorous Floating-Point Precision Tuning
Virtually all real-valued computations are carried out using floating-point data types and operations. With increasing emphasis on overall computational efficiency, compilers are increasingly attempting to optimize floating-point expressions. Practical reasoning about the correctness of these optimizations requires error analysis procedures that are rigorous (ideally, able to generate proof certificates), handle a wide variety of operators (e.g., transcendentals), and handle all normal programmatic constructs (e.g., conditionals and loops). Unfortunately, none of today's approaches can achieve this combination. This position paper summarizes recent progress achieved in the community on this topic. It then showcases the component techniques present within our own rigorous floating-point precision tuning framework called FPTuner, essentially offering a collection of "grab and go" tools that others can benefit from. Finally, we present FPTuner's limitations and describe how we can exploit contemporaneous research to improve it.
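As a rough illustration of the trade-off precision tuning navigates (this toy samples errors empirically, whereas FPTuner derives rigorous bounds; the expression, threshold, and helper names here are hypothetical), the sketch below checks whether evaluating a small expression at a lower precision stays within an accuracy requirement relative to a double-precision reference:

```python
# Illustrative sketch only: FPTuner derives rigorous per-operator error
# bounds, while this toy merely samples the error of a candidate lower
# precision against a double-precision reference.
import numpy as np

def expr(x, y, dtype):
    x, y = dtype(x), dtype(y)
    return dtype(np.sin(x) * y + x * x)

def max_sampled_error(dtype, samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-1.0, 1.0, samples)
    ys = rng.uniform(-1.0, 1.0, samples)
    worst = 0.0
    for x, y in zip(xs, ys):
        ref = expr(x, y, np.float64)    # reference precision
        cand = expr(x, y, dtype)        # candidate (lower) precision
        worst = max(worst, abs(float(ref) - float(cand)))
    return worst

threshold = 1e-6  # hypothetical accuracy requirement
for dtype in (np.float32, np.float16):
    err = max_sampled_error(dtype)
    print(dtype.__name__, err, "ok" if err <= threshold else "too imprecise")
```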
- Award ID(s): 1643056
- PAR ID: 10070275
- Date Published:
- Journal Name: Kalpa Publications in Computing
- Volume: 5
- ISSN: 2515-1762
- Page Range / eLocation ID: 19 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Automated techniques for analyzing floating-point code for roundoff error as well as control-flow instability are of growing importance. It is important to compute rigorous estimates of roundoff error, as well as to determine the extent of control-flow instability due to roundoff error flowing into conditional statements. Currently available analysis techniques are either non-rigorous or do not produce tight roundoff error bounds in many practical situations. Our approach, embodied in a new tool called Seesaw, employs symbolic reverse-mode automatic differentiation, smoothly handling conditionals and offering tight error bounds. Key steps in Seesaw include weakening conditionals to accommodate roundoff error, computing a symbolic error function that depends on the program paths taken, and optimizing this function, whose domain may be non-rectangular, by paving it with a rectangle-based cover. Our benchmarks cover many practical examples for which such rigorous analysis has hitherto not been applied, or has yielded inferior results.
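For context, the sketch below (our illustration, not Seesaw's implementation) shows the standard first-order roundoff model that such symbolic analyses build on: each operation is modeled as exact-times-(1 + d) with |d| bounded by the unit roundoff, and differentiating symbolically with respect to the perturbations yields an error function that a rigorous tool would then maximize over the input domain:

```python
# First-order roundoff model (illustration only): every floating-point
# operation op_i is modeled as op_i * (1 + d_i) with |d_i| <= eps, and the
# error estimate is eps times the sum of |d f_hat / d d_i| at d = 0.
import sympy as sp

x, y = sp.symbols('x y', real=True)
d1, d2, d3 = sp.symbols('d1 d2 d3', real=True)  # per-operation perturbations
eps = 2.0**-53                                  # double-precision unit roundoff

# Rounded evaluation of f(x, y) = (x*y + x) / y, one perturbation per op.
f_hat = (((x*y)*(1 + d1) + x)*(1 + d2) / y)*(1 + d3)

# First-order absolute error bound as a symbolic function of x and y.
err = sum(sp.Abs(sp.diff(f_hat, d).subs({d1: 0, d2: 0, d3: 0}))
          for d in (d1, d2, d3))
err_fn = sp.lambdify((x, y), eps * err)
print(err_fn(1.5, 3.0))  # estimate at one point; a rigorous tool maximizes over a box
```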
-
Testing code for floating-point exceptions is crucial, as exceptions can quickly propagate and produce unreliable numerical answers. The state of the art for testing floating-point exceptions in heterogeneous systems is quite limited, and existing solutions require the application’s source code, which precludes their use in accelerated libraries where the source is not publicly available. We present an approach to find inputs that trigger floating-point exceptions in black-box CPU or GPU functions, i.e., functions where the source code and information about input bounds are unavailable. Our approach is the first to use Bayesian optimization (BO) to identify such inputs and uses novel strategies to overcome the challenges that arise in applying BO to this problem. We implement our approach in the Xscope framework and demonstrate it on 58 functions from the CUDA Math Library and 81 functions from the Intel Math Library. Xscope is able to identify inputs that trigger exceptions in about 73% of the tested functions.
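The black-box setting is easy to picture in a few lines; the sketch below substitutes plain random sampling for Xscope's Bayesian optimization and uses a hypothetical stand-in for a library function under test:

```python
# Illustration of the black-box search problem (not Xscope itself, which
# uses Bayesian optimization rather than random sampling).
import math
import random

def target(x):                      # black box under test (hypothetical stand-in)
    return math.exp(x) / (x - 1.0)

def find_exception_input(fn, lo=-1e3, hi=1e3, trials=100_000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(lo, hi)
        try:
            y = fn(x)
        except (OverflowError, ZeroDivisionError):
            return x                            # Python surfaces the exception directly
        if math.isinf(y) or math.isnan(y):      # IEEE-style Inf/NaN outcomes
            return x
    return None

print(find_exception_input(target))
```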
-
Though photonic computing systems offer advantages in speed, scalability, and power consumption, they often have a limited dynamic encoding range due to low signal-to-noise ratios. Compared to digital floating-point encoding, photonic fixed-point encoding limits the precision of photonic computing when applied to scientific problems. In the case of iterative algorithms such as those commonly applied in machine learning or differential equation solvers, techniques like precision decomposition and residue iteration can be applied to increase accuracy at a greater computing cost. However, the analog nature of photonic symbols allows for modulation of both amplitude and frequency, opening the possibility of encoding both the significand and exponent of floating-point values on photonic computing systems to expand the dynamic range without expending additional energy. With appropriate schema, element-wise floating-point multiplication can be performed intrinsically through the interference of light. Herein, we present a method for configurable, signed, floating-point encoding and multiplication on a limited precision photonic primitive consisting of a directly modulated Mach–Zehnder interferometer. We demonstrate this method using Newton's method to find the Golden Ratio within ±0.11%, with six-level exponent encoding for a signed trinary digit-equivalent significand, corresponding to an effective increase of 243× in the photonic primitive's dynamic range.
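Stripped of the photonics, the encoding idea is that a floating-point product factors into a low-precision significand product plus an exponent sum; the sketch below (our illustration, with a hypothetical base-3, few-trit quantization) shows the decomposition and the quantization error it incurs:

```python
# Sketch of the encoding idea only (not a model of the photonic hardware):
# split each value into a coarsely quantized significand and an exponent,
# multiply significands, and add exponents.
import math

def encode(v, base=3, digits=1):
    """Hypothetical signed significand/exponent split in a small base."""
    if v == 0.0:
        return 0.0, 0
    e = math.floor(math.log(abs(v), base)) + 1
    s = v / base**e                               # significand in (-1, 1)
    s = round(s * base**digits) / base**digits    # quantize to a few trits
    return s, e

def multiply(a, b, base=3):
    (sa, ea), (sb, eb) = encode(a, base), encode(b, base)
    # Low-precision significand product (the interferometer's job) combined
    # with exponent addition (carried on a separate channel).
    return (sa * sb) * base**(ea + eb)

print(multiply(6.0, 0.75), 6.0 * 0.75)  # approximate vs exact product
```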